| aid | mid | abstract | related_work | ref_abstract |
|---|---|---|---|---|
1507.08234 | 2952042755 | We present two novel models of document coherence and their application to information retrieval (IR). Both models approximate document coherence using discourse entities, e.g. the subject or object of a sentence. Our first model views text as a Markov process generating sequences of discourse entities (entity n-grams); we use the entropy of these entity n-grams to approximate the rate at which new information appears in text, reasoning that as more new words appear, the topic increasingly drifts and text coherence decreases. Our second model extends the work of Guinaudeau & Strube [28] that represents text as a graph of discourse entities, linked by different relations, such as their distance or adjacency in text. We use several graph topology metrics to approximate different aspects of the discourse flow that can indicate coherence, such as the average clustering or betweenness of discourse entities in text. Experiments with several instantiations of these models show that: (i) our models perform on a par with two other well-known models of text coherence even without any parameter tuning, and (ii) reranking retrieval results according to their coherence scores gives notable performance gains, confirming a relation between document coherence and relevance. This work contributes two novel models of document coherence, the application of which to IR complements recent work in the integration of document cohesiveness or comprehensibility to ranking [5, 56]. | @cite_35 also present a model of text comprehensibility using lexical features, such as bag of words, word length and sentence length, to approximate semantic and syntactic complexity. They build a classifier that uses these features to assign a comprehensibility score to each document, and then rerank retrieved documents according to this comprehensibility score. We also rerank results using our coherence scores, and like @cite_35 , find this to be effective. 
Our work differs from that of @cite_35 in that (i) our coherence scores are not produced by a classifier but are computed as entropy or graph centrality approximations without any parameter tuning; and (ii) we do not use lexical frequency statistics, but solely discourse entities, e.g. subject and object, which we consider better approximations of semantic complexity than lexical frequency features. | {
"cite_N": [
"@cite_35"
],
"mid": [
"1974595223"
],
"abstract": [
"Imagine a physician and a patient doing a search on antibiotic resistance. Or a chess amateur and a grandmaster conducting a search on Alekhine's Defence. Although the topic is the same, arguably the two users in each case will satisfy their information needs with very different texts. Yet today search engines mostly adopt the one-size-fits-all solution, where personalization is restricted to topical preference. We found that users do not uniformly prefer simple texts, and that the text comprehensibility level should match the user's level of preparedness. Consequently, we propose to model the comprehensibility of texts as well as the users' reading proficiency in order to better explain how different users choose content for further exploration. We also model topic-specific reading proficiency, which allows us to better explain why a physician might choose to read sophisticated medical articles yet simple descriptions of SLR cameras. We explore different ways to build user profiles, and use collaborative filtering techniques to overcome data sparsity. We conducted experiments on large-scale datasets from a major Web search engine and a community question answering forum. Our findings confirm that explicitly modeling text comprehensibility can significantly improve content ranking (search results or answers, respectively)."
]
} |
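The first coherence model in the row above (entropy over entity n-grams as a proxy for topic drift) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the bigram order, the toy entity sequences, and the function name are all assumptions.

```python
import math
from collections import Counter

def entity_ngram_entropy(entities, n=2):
    """Shannon entropy (in bits) of the empirical distribution of entity
    n-grams. Under the model above, higher entropy means new entities
    appear faster, i.e. stronger topic drift and lower coherence."""
    ngrams = [tuple(entities[i:i + n]) for i in range(len(entities) - n + 1)]
    counts = Counter(ngrams)
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

# A repetitive entity sequence (coherent) vs. a drifting one (incoherent).
coherent = ["judge", "case", "judge", "case", "judge", "case"]
drifting = ["judge", "case", "market", "stock", "weather", "rain"]
assert entity_ngram_entropy(coherent) < entity_ngram_entropy(drifting)
```

The inequality holds because the drifting sequence produces five distinct bigrams (maximum entropy) while the coherent one cycles between two.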
1507.08120 | 1622860223 | In this paper, we investigate recommender systems from a network perspective and investigate recommendation networks, where nodes are items (e.g., movies) and edges are constructed from top-N recommendations (e.g., related movies). In particular, we focus on evaluating the reachability and navigability of recommendation networks and investigate the following questions: (i) How well do recommendation networks support navigation and exploratory search? (ii) What is the influence of parameters, in particular different recommendation algorithms and the number of recommendations shown, on reachability and navigability? and (iii) How can reachability and navigability be improved in these networks? We tackle these questions by first evaluating the reachability of recommendation networks by investigating their structural properties. Second, we evaluate navigability by simulating three different models of information seeking scenarios. We find that with standard algorithms, recommender systems are not well suited to navigation and exploration and propose methods to modify recommendations to improve this. Our work extends from one-click-based evaluations of recommender systems towards multi-click analysis (i.e., sequences of dependent clicks) and presents a general, comprehensive approach to evaluating navigability of arbitrary recommendation networks. | Network Analysis Ever since Milgram's small-world experiments @cite_3 , researchers have been making efforts to understand reachability and, in particular, navigability in networks. Kleinberg @cite_11 @cite_28 and Watts @cite_15 formalized the property that a navigable network requires short paths between all (or almost all) nodes @cite_22 . Formally, such a network has a low diameter bounded by a polynomial in @math , where @math is the number of nodes in the network, and a giant component containing almost all the nodes exists @cite_22 . 
In other words, because the majority of network nodes are connected, it is possible to reach all or almost all of the nodes, given global knowledge of the network. This property is referred to as reachability. The low diameter and the existence of a giant component constitute necessary topological conditions for network navigability. In this paper we apply a set of standard network-theoretic measures, including the distribution of component sizes and the component structure (via the bow-tie model), to assess whether a network satisfies them. | {
"cite_N": [
"@cite_22",
"@cite_28",
"@cite_3",
"@cite_15",
"@cite_11"
],
"mid": [
"2164195254",
"1572272766",
"1573579329",
"2070497706",
"2128678576"
],
"abstract": [
"The problem of searching for information in networks like the World Wide Web can be approached in a variety of ways, ranging from centralized indexing schemes to decentralized mechanisms that navigate the underlying network without knowledge of its global structure. The decentralized approach appears in a variety of settings: in the behavior of users browsing the Web by following hyperlinks; in the design of focused crawlers [4, 5, 8] and other agents that explore the Web’s links to gather information; and in the search protocols underlying decentralized peer-to-peer systems such as Gnutella [10], Freenet [7], and recent research prototypes [21, 22, 23], through which users can share resources without a central server. In recent work, we have been investigating the problem of decentralized search in large information networks [14, 15]. Our initial motivation was an experiment that dealt directly with the search problem in a decidedly pre-Internet context: Stanley Milgram’s famous study of the small-world phenomenon [16, 17]. Milgram was seeking to determine whether most pairs of people in society were linked by short chains of acquaintances, and for this purpose he recruited individuals to try forwarding a letter to a designated “target” through people they knew on a first-name basis. The starting individuals were given basic information about the target — his name, address, occupation, and a few other personal details — and had to choose a single acquaintance to send the letter to, with goal of reaching the target as quickly as possible; subsequent recipients followed the same procedure, and the chain closed in on its destination. Of the chains that completed, the median number of steps required was six — a result that has since entered popular culture as the “six degrees of separation” principle [11]. Milgram’s experiment contains two striking discoveries — that short chains are pervasive, and that people are able to find them. 
This latter point is concerned precisely with a type of decentralized navigation in a social network, consisting of people as nodes and links joining",
"It is easier to find short chains between points in some networks than others. The small-world phenomenon — the principle that most of us are linked by short chains of acquaintances — was first investigated as a question in sociology1,2 and is a feature of a range of networks arising in nature and technology3,4,5. Experimental study of the phenomenon1 revealed that it has two fundamental components: first, such short chains are ubiquitous, and second, individuals operating with purely local information are very adept at finding these chains. The first issue has been analysed2,3,4, and here I investigate the second by modelling how individuals can find short chains in a large social network.",
"",
"Social networks have the surprising property of being “searchable”: Ordinary people are capable of directing messages through their network of acquaintances to reach a specific but distant target person in only a few steps. We present a model that offers an explanation of social network searchability in terms of recognizable personal identities: sets of characteristics measured along a number of social dimensions. Our model defines a class of searchable networks and a method for searching them that may be applicable to many network search problems, including the location of data files in peer-to-peer networks, pages on the World Wide Web, and information in distributed databases.",
"Long a matter of folklore, the small-world phenomenon'''' --the principle that we are all linked by short chains of acquaintances --was inaugurated as an area of experimental study in the social sciences through the pioneering work of Stanley Milgram in the 1960''s. This work was among the first to make the phenomenon quantitative, allowing people to speak of the six degrees of separation'''' between any two people in the United States. Since then, a number of network models have been proposed as frameworks in which to study the problem analytically. One of the most refined of these models was formulated in recent work of Watts and Strogatz; their framework provided compelling evidence that the small-world phenomenon is pervasive in a range of networks arising in nature and technology, and a fundamental ingredient in the evolution of the World Wide Web. But existing models are insufficient to explain the striking algorithmic component of Milgram''s original findings: that individuals using local information are collectively very effective at actually constructing short paths between two points in a social network. Although recently proposed network models are rich in short paths, we prove that no decentralized algorithm, operating with local information only, can construct short paths in these networks with non-negligible probability. We then define an infinite family of network models that naturally generalizes the Watts-Strogatz model, and show that for one of these models, there is a decentralized algorithm capable of finding short paths with high probability. More generally, we provide a strong characterization of this family of network models, showing that there is in fact a unique model within the family for which decentralized algorithms are effective."
]
} |
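The two necessary topological conditions named in the row above (a giant component covering almost all nodes, and a low diameter) can be checked with off-the-shelf graph tooling. A minimal sketch using networkx; the 90% giant-component threshold and the generated test graph are assumptions, and the bow-tie analysis of directed networks is not shown:

```python
import networkx as nx

def navigability_preconditions(G, giant_frac=0.9):
    """Return whether G has a giant component covering at least
    giant_frac of all nodes, and the diameter of that component."""
    giant_nodes = max(nx.connected_components(G), key=len)
    giant = G.subgraph(giant_nodes)
    has_giant = len(giant_nodes) >= giant_frac * G.number_of_nodes()
    return has_giant, nx.diameter(giant)

# A Watts-Strogatz small-world graph: a few rewired shortcut edges
# collapse the ring lattice's large diameter to a small one.
G = nx.connected_watts_strogatz_graph(n=500, k=6, p=0.1, seed=42)
has_giant, diameter = navigability_preconditions(G)
assert has_giant and diameter < 30
```

For comparison, the un-rewired ring lattice (p=0) has a diameter near n/k, roughly 80 here, which fails the low-diameter condition.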
1507.08120 | 1622860223 | In this paper, we investigate recommender systems from a network perspective and investigate recommendation networks, where nodes are items (e.g., movies) and edges are constructed from top-N recommendations (e.g., related movies). In particular, we focus on evaluating the reachability and navigability of recommendation networks and investigate the following questions: (i) How well do recommendation networks support navigation and exploratory search? (ii) What is the influence of parameters, in particular different recommendation algorithms and the number of recommendations shown, on reachability and navigability? and (iii) How can reachability and navigability be improved in these networks? We tackle these questions by first evaluating the reachability of recommendation networks by investigating their structural properties. Second, we evaluate navigability by simulating three different models of information seeking scenarios. We find that with standard algorithms, recommender systems are not well suited to navigation and exploration and propose methods to modify recommendations to improve this. Our work extends from one-click-based evaluations of recommender systems towards multi-click analysis (i.e., sequences of dependent clicks) and presents a general, comprehensive approach to evaluating navigability of arbitrary recommendation networks. | Kleinberg also found that an efficiently navigable network possesses certain structural properties that make it possible to design efficient local search algorithms (i.e., algorithms that only have local knowledge of the network) @cite_22 @cite_28 . The delivery time (the expected number of steps to reach an arbitrary target node) of such algorithms is then sub-linear in @math . In this paper, we investigate the efficient navigability of networks through the simulation of a range of search and navigation models. | {
"cite_N": [
"@cite_28",
"@cite_22"
],
"mid": [
"1572272766",
"2164195254"
],
"abstract": [
"It is easier to find short chains between points in some networks than others. The small-world phenomenon — the principle that most of us are linked by short chains of acquaintances — was first investigated as a question in sociology1,2 and is a feature of a range of networks arising in nature and technology3,4,5. Experimental study of the phenomenon1 revealed that it has two fundamental components: first, such short chains are ubiquitous, and second, individuals operating with purely local information are very adept at finding these chains. The first issue has been analysed2,3,4, and here I investigate the second by modelling how individuals can find short chains in a large social network.",
"The problem of searching for information in networks like the World Wide Web can be approached in a variety of ways, ranging from centralized indexing schemes to decentralized mechanisms that navigate the underlying network without knowledge of its global structure. The decentralized approach appears in a variety of settings: in the behavior of users browsing the Web by following hyperlinks; in the design of focused crawlers [4, 5, 8] and other agents that explore the Web’s links to gather information; and in the search protocols underlying decentralized peer-to-peer systems such as Gnutella [10], Freenet [7], and recent research prototypes [21, 22, 23], through which users can share resources without a central server. In recent work, we have been investigating the problem of decentralized search in large information networks [14, 15]. Our initial motivation was an experiment that dealt directly with the search problem in a decidedly pre-Internet context: Stanley Milgram’s famous study of the small-world phenomenon [16, 17]. Milgram was seeking to determine whether most pairs of people in society were linked by short chains of acquaintances, and for this purpose he recruited individuals to try forwarding a letter to a designated “target” through people they knew on a first-name basis. The starting individuals were given basic information about the target — his name, address, occupation, and a few other personal details — and had to choose a single acquaintance to send the letter to, with goal of reaching the target as quickly as possible; subsequent recipients followed the same procedure, and the chain closed in on its destination. Of the chains that completed, the median number of steps required was six — a result that has since entered popular culture as the “six degrees of separation” principle [11]. Milgram’s experiment contains two striking discoveries — that short chains are pervasive, and that people are able to find them. 
This latter point is concerned precisely with a type of decentralized navigation in a social network, consisting of people as nodes and links joining"
]
} |
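The delivery time of a decentralized local search, as in the row above, can be estimated by simulation. This sketch uses networkx's generator for Kleinberg's lattice model together with a hypothetical greedy router; it is far simpler than the three search models simulated in the paper, and the grid size, clustering exponent, and sample count are arbitrary choices:

```python
import random
import networkx as nx

def greedy_delivery_time(G, source, target, max_steps=1000):
    """Greedy decentralized search: from each node, hop to the
    out-neighbor closest (in lattice distance) to the target.
    Returns the number of steps taken, or None if the walk stalls."""
    dist = lambda u, v: abs(u[0] - v[0]) + abs(u[1] - v[1])
    current, steps = source, 0
    while current != target and steps < max_steps:
        nxt = min(G.successors(current), key=lambda u: dist(u, target))
        if dist(nxt, target) >= dist(current, target):
            return None  # stalled: no neighbor makes progress
        current, steps = nxt, steps + 1
    return steps if current == target else None

# Kleinberg's lattice with clustering exponent r = 2, the unique value
# for which greedy routing with local knowledge is efficient.
G = nx.navigable_small_world_graph(20, p=1, q=1, r=2, dim=2, seed=7)
nodes = list(G.nodes())
random.seed(7)
times = [t for _ in range(50)
         if (t := greedy_delivery_time(G, *random.sample(nodes, 2))) is not None]
assert times and sum(times) / len(times) < 400  # sub-linear in n = 400 nodes
```

Because each node keeps its grid neighbors (p=1), the greedy walk always makes progress, so every trial completes.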
1507.08120 | 1622860223 | In this paper, we investigate recommender systems from a network perspective and investigate recommendation networks, where nodes are items (e.g., movies) and edges are constructed from top-N recommendations (e.g., related movies). In particular, we focus on evaluating the reachability and navigability of recommendation networks and investigate the following questions: (i) How well do recommendation networks support navigation and exploratory search? (ii) What is the influence of parameters, in particular different recommendation algorithms and the number of recommendations shown, on reachability and navigability? and (iii) How can reachability and navigability be improved in these networks? We tackle these questions by first evaluating the reachability of recommendation networks by investigating their structural properties. Second, we evaluate navigability by simulating three different models of information seeking scenarios. We find that with standard algorithms, recommender systems are not well suited to navigation and exploration and propose methods to modify recommendations to improve this. Our work extends from one-click-based evaluations of recommender systems towards multi-click analysis (i.e., sequences of dependent clicks) and presents a general, comprehensive approach to evaluating navigability of arbitrary recommendation networks. | Recommender Systems Initially, recommender systems were mostly evaluated in terms of recommendation accuracy. More recently, the importance of evaluation metrics beyond accuracy has been identified @cite_0 @cite_29 . Approaches such as diversity (e.g., @cite_1 ), novelty (e.g., @cite_5 ), and serendipity, which are thought to be orthogonal to the traditional accuracy-based evaluation measures, have been found to increase user satisfaction @cite_31 . Recommender systems have been found to show a filter bubble effect (even though actually consuming the recommended items has been found to lessen the effect) @cite_30 . 
Diversification of recommendations can be an effective means of increasing the spectrum of recommendations users are exposed to. | {
"cite_N": [
"@cite_30",
"@cite_29",
"@cite_1",
"@cite_0",
"@cite_5",
"@cite_31"
],
"mid": [
"2050125880",
"",
"2103186657",
"1971040550",
"2056564184",
"2155912844"
],
"abstract": [
"Eli Pariser coined the term 'filter bubble' to describe the potential for online personalization to effectively isolate people from a diversity of viewpoints or content. Online recommender systems - built on algorithms that attempt to predict which items users will most enjoy consuming - are one family of technologies that potentially suffers from this effect. Because recommender systems have become so prevalent, it is important to investigate their impact on users in these terms. This paper examines the longitudinal impacts of a collaborative filtering-based recommender system on users. To the best of our knowledge, it is the first paper to measure the filter bubble effect in terms of content diversity at the individual level. We contribute a novel metric to measure content diversity based on information encoded in user-generated tags, and we present a new set of methods to examine the temporal effect of recommender systems on the user experience. We do find that recommender systems expose users to a slightly narrowing set of items over time. However, we also see evidence that users who actually consume the items recommended to them experience lessened narrowing effects and rate items more positively.",
"",
"This paper considers a popular class of recommender systems that are based on Collaborative Filtering (CF) and proposes a novel technique for diversifying the recommendations that they give to users. Items are clustered based on a unique notion of priority-medoids that provides a natural balance between the need to present highly ranked items vs. highly diverse ones. Our solution estimates items diversity by comparing the rankings that different users gave to the items, thereby enabling diversification even in common scenarios where no semantic information on the items is available. It also provides a natural zoom-in mechanism to focus on items (clusters) of interest and recommending diversified similar items. We present DiRec a plug-in that implements the above concepts and allows CF Recommender systems to diversify their recommendations. We illustrate the operation of DiRec in the context of a movie recommendation system and present a thorough experimental study that demonstrates the effectiveness of our recommendation diversification technique and its superiority over previous solutions.",
"Recommender systems have been evaluated in many, often incomparable, ways. In this article, we review the key decisions in evaluating collaborative filtering recommender systems: the user tasks being evaluated, the types of analysis and datasets being used, the ways in which prediction quality is measured, the evaluation of prediction attributes other than quality, and the user-based evaluation of the system as a whole. In addition to reviewing the evaluation strategies used by prior researchers, we present empirical results from the analysis of various accuracy metrics on one content domain where all the tested metrics collapsed roughly into three equivalence classes. Metrics within each equivalency class were strongly correlated, while metrics from different equivalency classes were uncorrelated.",
"Most recommender algorithms produce types similar to those the active user has accessed before. This is because they measure user similarity only from the co-rating behaviors against items and compute recommendations by analyzing the items possessed by the users most similar to the active user. In this paper, we define item novelty as the smallest distance from the class the user accessed before to the class that includes target items over the taxonomy. Then, we try to accurately recommend highly novel items to the user. First, our method measures user similarity by employing items rated by users and a taxonomy of items. It can accurately identify many items that may suit the user. Second, it creates a graph whose nodes are users; weighted edges are set between users according to their similarity. It analyzes the user graph and extracts users that are related on the graph though the similarity between the active user and each of those users is not high. The users so extracted are likely to have highly novel items for the active user. An evaluation conducted on several datasets finds that our method accurately identifies items with higher novelty than previous methods.",
"In this work we present topic diversification, a novel method designed to balance and diversify personalized recommendation lists in order to reflect the user's complete spectrum of interests. Though being detrimental to average accuracy, we show that our method improves user satisfaction with recommendation lists, in particular for lists generated using the common item-based collaborative filtering algorithm.Our work builds upon prior research on recommender systems, looking at properties of recommendation lists as entities in their own right rather than specifically focusing on the accuracy of individual recommendations. We introduce the intra-list similarity metric to assess the topical diversity of recommendation lists and the topic diversification approach for decreasing the intra-list similarity. We evaluate our method using book recommendation data, including offline analysis on 361, !, 349 ratings and an online study involving more than 2, !, 100 subjects."
]
} |
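The intra-list similarity metric mentioned in the topic-diversification abstract above is simple to state in code. A generic sketch; the genre annotations and the Jaccard similarity function are made-up stand-ins, not the measure's original instantiation:

```python
def intra_list_similarity(items, sim):
    """Average pairwise similarity within a recommendation list;
    topic diversification aims to lower this value."""
    pairs = [(a, b) for i, a in enumerate(items) for b in items[i + 1:]]
    return sum(sim(a, b) for a, b in pairs) / len(pairs)

# Hypothetical genre annotations with a Jaccard similarity over them.
genres = {"m1": {"action"}, "m2": {"action", "thriller"}, "m3": {"drama"}}
def genre_sim(a, b):
    return len(genres[a] & genres[b]) / len(genres[a] | genres[b])

# Two action movies form a less diverse list than action plus drama.
assert intra_list_similarity(["m1", "m2"], genre_sim) > \
       intra_list_similarity(["m1", "m3"], genre_sim)
```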
1507.08120 | 1622860223 | In this paper, we investigate recommender systems from a network perspective and investigate recommendation networks, where nodes are items (e.g., movies) and edges are constructed from top-N recommendations (e.g., related movies). In particular, we focus on evaluating the reachability and navigability of recommendation networks and investigate the following questions: (i) How well do recommendation networks support navigation and exploratory search? (ii) What is the influence of parameters, in particular different recommendation algorithms and the number of recommendations shown, on reachability and navigability? and (iii) How can reachability and navigability be improved in these networks? We tackle these questions by first evaluating the reachability of recommendation networks by investigating their structural properties. Second, we evaluate navigability by simulating three different models of information seeking scenarios. We find that with standard algorithms, recommender systems are not well suited to navigation and exploration and propose methods to modify recommendations to improve this. Our work extends from one-click-based evaluations of recommender systems towards multi-click analysis (i.e., sequences of dependent clicks) and presents a general, comprehensive approach to evaluating navigability of arbitrary recommendation networks. | In terms of reachability, the static topology of recommendation networks has been studied for the case of music recommenders. Their corresponding recommendation networks have been found to exhibit heavy-tail degree distributions and small-world properties @cite_27 , implying that they are efficiently navigable with local search algorithms. The authors of @cite_16 studied sources (nodes that are never recommended) in music recommendation networks and found that the fraction of sources remained constant, independent of the recommendation approach and the network size. 
This indicates that recommendation networks generally suffer from reachability problems. Celma and Herrera @cite_26 found that collaborative filtering on last.fm led to recommendation networks that are prone to a popularity bias, with recommendations biased towards popular songs or artists. They also found that collaborative filtering provided the most accurate recommendations, while at the same time this made it harder for users to navigate to items in the long tail. A hybrid approach and content-based methods provided better novel recommendations. These results suggest that a trade-off exists between accuracy and other evaluation metrics. @cite_13 proposed measuring reachability in the bipartite recommendation graph of users and items as an evaluation measure. | {
"cite_N": [
"@cite_27",
"@cite_16",
"@cite_13",
"@cite_26"
],
"mid": [
"1999326226",
"2008717930",
"1844760852",
"2011700584"
],
"abstract": [
"We study the topology of several music recommendation networks, which arise from relationships between artist, co-occurrence of songs in play lists or experts’ recommendation. The analysis uncovers the emergence of complex network phenomena in these kinds of recommendation networks, built considering artists as nodes and their resemblance as links. We observe structural properties that provide some hints on navigation and possible optimizations on the design of music recommendation systems. Finally, the analysis derived from existing music knowledge sources provides a deeper understanding of the human music similarity perception.",
"To exploit the enormous potential of niche products, modern information systems must support users in exploring digital libraries and online catalogs. A straight-forward way of doing so is to support browsing the available items, which is in general realized by presenting a user the top-N recommendations for each item. However, recent research indicates that most of the niche products reside in the so-called Long Tail, and simple collaborative filtering-based recommender systems alone do not allow to explore these niche products. In this paper we show that it is not only a popularity problem related to the collaborative filtering approach that makes a portion of the elements of a digital library inaccessible via browsing, but also a consequence of the top N-recommendation approach itself.",
"We present a novel framework for studying recommendation algorithms in terms of the ‘jumps’ that they make to connect people to artifacts. This approach emphasizes reachability via an algorithm within the implicit graph structure underlying a recommender dataset and allows us to consider questions relating algorithmic parameters to properties of the datasets. For instance, given a particular algorithm ‘jump,’ what is the average path length from a person to an artifact? Or, what choices of minimum ratings and jumps maintain a connected graph? We illustrate the approach with a common jump called the ‘hammock’ using movie recommender datasets.",
"This paper presents two methods, named Item- and User-centric, to evaluate the quality of novel recommendations. The former method focuses on analyzing the item-based recommendation network. The aim is to detect whether the network topology has any pathology that hinders novel recommendations. The latter, user-centric evaluation, aims at measuring users' perceived quality of novel, previously unknown, recommendations. The results of the experiments, done in the music recommendation context, show that last.fm social recommender, based on collaborative filtering, is prone to popularity bias. This has direct consequences on the topology of the item-based recommendation network. Pure audio content-based methods (CB) are not affected by popularity. However, a user-centric experiment done with 288 subjects shows that even though a social-based approach recommends less novel items than our CB, users' perceived quality is better than those recommended by a pure CB method."
]
} |
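The notion of sources described above (items that are never recommended and can therefore never be reached by following recommendations) translates directly into an in-degree check on the recommendation graph. A toy sketch with a made-up top-2 recommendation table:

```python
import networkx as nx

def source_fraction(G):
    """Fraction of items that are never recommended (in-degree 0)
    in a directed top-N recommendation network."""
    sources = [v for v, d in G.in_degree() if d == 0]
    return len(sources) / G.number_of_nodes()

# Toy top-2 recommendation network: popular items absorb most edges,
# so long-tail items D and E recommend others but are never recommended.
recs = {"A": ["B", "C"], "B": ["A", "C"], "C": ["A", "B"],
        "D": ["A", "B"], "E": ["A", "C"]}
G = nx.DiGraph((item, r) for item, rs in recs.items() for r in rs)
assert source_fraction(G) == 2 / 5
```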
1507.08120 | 1622860223 | In this paper, we investigate recommender systems from a network perspective and investigate recommendation networks, where nodes are items (e.g., movies) and edges are constructed from top-N recommendations (e.g., related movies). In particular, we focus on evaluating the reachability and navigability of recommendation networks and investigate the following questions: (i) How well do recommendation networks support navigation and exploratory search? (ii) What is the influence of parameters, in particular different recommendation algorithms and the number of recommendations shown, on reachability and navigability? and (iii) How can reachability and navigability be improved in these networks? We tackle these questions by first evaluating the reachability of recommendation networks by investigating their structural properties. Second, we evaluate navigability by simulating three different models of information seeking scenarios. We find that with standard algorithms, recommender systems are not well suited to navigation and exploration and propose methods to modify recommendations to improve this. Our work extends from one-click-based evaluations of recommender systems towards multi-click analysis (i.e., sequences of dependent clicks) and presents a general, comprehensive approach to evaluating navigability of arbitrary recommendation networks. | A simple method to improve reachability, proposed by @cite_14 , is to select recommendations specifically so that otherwise unreachable items in the network become reachable. In this work, we improve on this by proposing a method that not only improves reachability but also ensures the relevance of the selected recommendations. | {
"cite_N": [
"@cite_14"
],
"mid": [
"38867610"
],
"abstract": [
"Many music portals offer the possibility to explore music collections via browsing automatically generated music recommendations. In this paper we argue that such music recommender systems can be transformed into an equivalent recommendation graph. We then analyze the recommendation graph of a real-world content-based music recommender systems to find out if users can really explore the underlying song database by following those recommendations. We find that some songs are not recommended at all and are consequently not reachable via browsing. We then take a first attempt to modify a recommendation network in such a way that the resulting network is better suited to explore the respective music space."
]
} |
1507.08120 | 1622860223 | In this paper, we investigate recommender systems from a network perspective and investigate recommendation networks, where nodes are items (e.g., movies) and edges are constructed from top-N recommendations (e.g., related movies). In particular, we focus on evaluating the reachability and navigability of recommendation networks and investigate the following questions: (i) How well do recommendation networks support navigation and exploratory search? (ii) What is the influence of parameters, in particular different recommendation algorithms and the number of recommendations shown, on reachability and navigability? and (iii) How can reachability and navigability be improved in these networks? We tackle these questions by first evaluating the reachability of recommendation networks by investigating their structural properties. Second, we evaluate navigability by simulating three different models of information seeking scenarios. We find that with standard algorithms, recommender systems are not well suited to navigation and exploration and propose methods to modify recommendations to improve this. Our work extends from one-click-based evaluations of recommender systems towards multi-click analysis (i.e., sequences of dependent clicks) and presents a general, comprehensive approach to evaluating navigability of arbitrary recommendation networks. | While these analyses have shown certain topological properties such as heavy-tail degree distributions and small-world properties @cite_27 , we know very little about the dynamics of actually using recommendations to find navigational paths through a recommender system. | {
"cite_N": [
"@cite_27"
],
"mid": [
"1999326226"
],
"abstract": [
"We study the topology of several music recommendation networks, which arise from relationships between artist, co-occurrence of songs in play lists or experts’ recommendation. The analysis uncovers the emergence of complex network phenomena in these kinds of recommendation networks, built considering artists as nodes and their resemblance as links. We observe structural properties that provide some hints on navigation and possible optimizations on the design of music recommendation systems. Finally, the analysis derived from existing music knowledge sources provides a deeper understanding of the human music similarity perception."
]
} |
1507.08254 | 1629012316 | We propose a robust and efficient approach to the problem of compressive phase retrieval in which the goal is to reconstruct a sparse vector from the magnitude of a number of its linear measurements. The proposed framework relies on constrained sensing vectors and a two-stage reconstruction method that consists of two standard convex programs that are solved sequentially. In recent years, various methods have been proposed for compressive phase retrieval, but they have suboptimal sample complexity or lack robustness guarantees. The main obstacle has been that there is no straightforward convex relaxation for the type of structure in the target. Given a set of underdetermined measurements, there is a standard framework for recovering a sparse matrix, and a standard framework for recovering a low-rank matrix. However, a general, efficient method for recovering a jointly sparse and low-rank matrix has remained elusive. Deviating from the models with generic measurements, in this paper we show that if the sensing vectors are chosen at random from an incoherent subspace, then the low-rank and sparse structures of the target signal can be effectively decoupled. We show that a recovery algorithm that consists of a low-rank recovery stage followed by a sparse recovery stage will produce an accurate estimate of the target when the number of measurements is @math , where @math and @math denote the sparsity level and the dimension of the input signal. We also evaluate the algorithm through numerical simulation. | While preparing the final version of the current paper, we became aware of @cite_15 , which independently proposed an approach similar to ours to address the CPR problem. | {
"cite_N": [
"@cite_15"
],
"mid": [
"1552037902"
],
"abstract": [
"Abstract In this short note we propose a simple two-stage sparse phase retrieval strategy that uses a near-optimal number of measurements, and is both computationally efficient and robust to measurement noise. In addition, the proposed strategy is fairly general, allowing for a large number of new measurement constructions and recovery algorithms to be designed with minimal effort."
]
} |
1507.08076 | 981523069 | The pose problem is one of the bottlenecks in automatic face recognition. We argue that one of the difficulties in this problem is the severe misalignment in face images or feature vectors with different poses. In this paper, we propose that this problem can be statistically solved or at least mitigated by maximizing the intra-subject across-pose correlations via canonical correlation analysis (CCA). In our method, based on the data set with coupled face images of the same identities and across two different poses, CCA learns simultaneously two linear transforms, each for one pose. In the transformed subspace, the intra-subject correlations between the different poses are maximized, which implies pose-invariance or pose-robustness is achieved. The experimental results show that our approach could considerably improve the recognition performance. And if further enhanced with holistic+local feature representation, the performance could be comparable to the state-of-the-art. | Robustness to pose change is a challenging and classical problem in face recognition research. Literature surveys can be found in @cite_30 @cite_51 @cite_50 . When multiple face images under different views are available, the difficulty of the pose problem is much reduced, but in real-world applications such a requirement is not always feasible. In this paper, we consider only the basic and most challenging scenario: recognizing faces across pose differences using a single query image. According to their main contributions, related research works can be roughly categorized into three classes: geometrical approaches, statistical approaches, and hybrid approaches. | {
"cite_N": [
"@cite_30",
"@cite_51",
"@cite_50"
],
"mid": [
"",
"2031119088",
"1989702938"
],
"abstract": [
"",
"Sparsely registering a face (i.e., locating 2---3 fiducial points) is considered a much easier task than densely registering one; especially with varying viewpoints. Unfortunately, the converse tends to be true for the task of viewpoint-invariant face verification; the more registration points one has the better the performance. In this paper we present a novel approach to viewpoint invariant face verification which we refer to as the \"patch-whole\" algorithm. The algorithm is able to obtain good verification performance with sparsely registered faces. Good performance is achieved by not assuming any alignment between gallery and probe view faces, but instead trying to learn the joint likelihood functions for faces of similar and dissimilar identities. Generalization is encouraged by factorizing the joint gallery and probe appearance likelihood, for each class, into an ensemble of \"patch-whole\" likelihoods. We make an additional contribution in this paper by reviewing existing approaches to viewpoint-invariant face verification and demonstrating how most of them fall into one of two categories; namely viewpoint-generative or viewpoint-discriminative. This categorization is instructive as it enables us to compare our \"patch-whole\" algorithm to other paradigms in viewpoint-invariant face verification and also gives deeper insights into why the algorithm performs so well.",
"As one of the most successful applications of image analysis and understanding, face recognition has recently received significant attention, especially during the past several years. At least two reasons account for this trend: the first is the wide range of commercial and law enforcement applications, and the second is the availability of feasible technologies after 30 years of research. Even though current machine recognition systems have reached a certain level of maturity, their success is limited by the conditions imposed by many real applications. For example, recognition of face images acquired in an outdoor environment with changes in illumination and or pose remains a largely unsolved problem. In other words, current systems are still far away from the capability of the human perception system.This paper provides an up-to-date critical survey of still- and video-based face recognition research. There are two underlying motivations for us to write this survey paper: the first is to provide an up-to-date review of the existing literature, and the second is to offer some insights into the studies of machine recognition of faces. To provide a comprehensive survey, we not only categorize existing recognition techniques but also present detailed descriptions of representative methods within each category. In addition, relevant topics such as psychophysical studies, system evaluation, and issues of illumination and pose variation are covered."
]
} |
1507.08076 | 981523069 | The pose problem is one of the bottlenecks in automatic face recognition. We argue that one of the difficulties in this problem is the severe misalignment in face images or feature vectors with different poses. In this paper, we propose that this problem can be statistically solved or at least mitigated by maximizing the intra-subject across-pose correlations via canonical correlation analysis (CCA). In our method, based on the data set with coupled face images of the same identities and across two different poses, CCA learns simultaneously two linear transforms, each for one pose. In the transformed subspace, the intra-subject correlations between the different poses are maximized, which implies pose-invariance or pose-robustness is achieved. The experimental results show that our approach could considerably improve the recognition performance. And if further enhanced with holistic+local feature representation, the performance could be comparable to the state-of-the-art. | Although methods like the 3D morphable model achieve good performance, accurate 3D face reconstruction is still a complicated and difficult problem. @cite_37 argued that the depth information of faces may not be significantly discriminative for modeling 2D pose variability. Thus, if one could obtain several generic canonical depth maps for different input face groups, such as race, age, and gender, then face images under different poses could be easily rendered. Consequently, they proposed 3D Generic Elastic Models for generating new views of a template face; face recognition across pose is then performed by matching the input faces against the rendered faces. Besides the work of @cite_37 , Castillo and Jacobs @cite_38 @cite_10 also proposed a simplified geometrical approach for cross-pose face recognition: they approximated the 3D shape of the face as a cylinder and recognized faces through stereo matching.
To address the problem of large pose differences, an improvement of this approach using surface slant was recently proposed @cite_35 . Since it does not require 3D reconstruction, recognizing faces across pose via stereo matching is simple and effective. | {
"cite_N": [
"@cite_38",
"@cite_37",
"@cite_10",
"@cite_35"
],
"mid": [
"2148100798",
"2127182692",
"2157577964",
"2082276979"
],
"abstract": [
"We propose using stereo matching for 2-D face recognition across pose. We match one 2-D query image to one 2-D gallery image without performing 3-D reconstruction. Then the cost of this matching is used to evaluate the similarity of the two images. We show that this cost is robust to pose variations. To illustrate this idea we built a face recognition system on top of a dynamic programming stereo matching algorithm. The method works well even when the epipolar lines we use do not exactly fit the viewpoints. We have tested our approach on the PIE dataset. In all the experiments, our method demonstrates effective performance compared with other algorithms.",
"Classical face recognition techniques have been successful at operating under well-controlled conditions; however, they have difficulty in robustly performing recognition in uncontrolled real-world scenarios where variations in pose, illumination, and expression are encountered. In this paper, we propose a new method for real-world unconstrained pose-invariant face recognition. We first construct a 3D model for each subject in our database using only a single 2D image by applying the 3D Generic Elastic Model (3D GEM) approach. These 3D models comprise an intermediate gallery database from which novel 2D pose views are synthesized for matching. Before matching, an initial estimate of the pose of the test query is obtained using a linear regression approach based on automatic facial landmark annotation. Each 3D model is subsequently rendered at different poses within a limited search space about the estimated pose, and the resulting images are matched against the test query. Finally, we compute the distances between the synthesized images and test query by using a simple normalized correlation matcher to show the effectiveness of our pose synthesis method to real-world data. We present convincing results on challenging data sets and video sequences demonstrating high recognition accuracy under controlled as well as unseen, uncontrolled real-world scenarios using a fast implementation.",
"Face recognition across pose is a problem of fundamental importance in computer vision. We propose to address this problem by using stereo matching to judge the similarity of two, 2D images of faces seen from different poses. Stereo matching allows for arbitrary, physically valid, continuous correspondences. We show that the stereo matching cost provides a very robust measure of similarity of faces that is insensitive to pose variations. To enable this, we show that, for conditions common in face recognition, the epipolar geometry of face images can be computed using either four or three feature points. We also provide a straightforward adaptation of a stereo matching algorithm to compute the similarity between faces. The proposed approach has been tested on the CMU PIE data set and demonstrates superior performance compared to existing methods in the presence of pose variation. It also shows robustness to lighting variation.",
"2-D face recognition in the presence of large pose variations presents a significant challenge. When comparing a frontal image of a face to a near profile image, one must cope with large occlusions, non-linear correspondences, and significant changes in appearance due to viewpoint. Stereo matching has been used to handle these problems, but performance of this approach degrades with large pose changes. We show that some of this difficulty is due to the effect that foreshortening of slanted surfaces has on window-based matching methods, which are needed to provide robustness to lighting change. We address this problem by designing a new, dynamic programming stereo algorithm that accounts for surface slant. We show that on the CMU PIE dataset this method results in significant improvements in recognition performance."
]
} |
1507.08076 | 981523069 | The pose problem is one of the bottlenecks in automatic face recognition. We argue that one of the difficulties in this problem is the severe misalignment in face images or feature vectors with different poses. In this paper, we propose that this problem can be statistically solved or at least mitigated by maximizing the intra-subject across-pose correlations via canonical correlation analysis (CCA). In our method, based on the data set with coupled face images of the same identities and across two different poses, CCA learns simultaneously two linear transforms, each for one pose. In the transformed subspace, the intra-subject correlations between the different poses are maximized, which implies pose-invariance or pose-robustness is achieved. The experimental results show that our approach could considerably improve the recognition performance. And if further enhanced with holistic+local feature representation, the performance could be comparable to the state-of-the-art. | Besides the geometrical approaches, building statistical models is another popular way to recognize faces across poses. A typical statistical approach is the eigen light-field method proposed by @cite_6 . They built a complete appearance model including all possible pose variations. A test image can be viewed as a part of this complete model, and the missing parts are estimated from the available parts. Recognition is performed by comparing the coefficients of the complete appearance model. The tied factor analysis method proposed by @cite_49 is another typical statistical approach. In this method, tied factors across pose differences are learned using the Expectation Maximization algorithm. Then, face recognition is performed based on a probabilistic distance metric built on the factor loadings. Besides principal component analysis and factor analysis, @cite_17 and Sharma and Jacobs @cite_28 applied partial least squares to cross-pose face recognition.
They used partial least squares to learn a pair of projection matrices, one for each pose, and cross-pose face recognition is performed by comparing the intermediate "correlated projections". | {
"cite_N": [
"@cite_28",
"@cite_49",
"@cite_6",
"@cite_17"
],
"mid": [
"2030899956",
"2154211011",
"2137695006",
"2001565863"
],
"abstract": [
"This paper presents a novel way to perform multi-modal face recognition. We use Partial Least Squares (PLS) to linearly map images in different modalities to a common linear subspace in which they are highly correlated. PLS has been previously used effectively for feature selection in face recognition. We show both theoretically and experimentally that PLS can be used effectively across modalities. We also formulate a generic intermediate subspace comparison framework for multi-modal recognition. Surprisingly, we achieve high performance using only pixel intensities as features. We experimentally demonstrate the highest published recognition rates on the pose variations in the PIE data set, and also show that PLS can be used to compare sketches to photos, and to compare images taken at different resolutions.",
"Face recognition algorithms perform very unreliably when the pose of the probe face is different from the gallery face: typical feature vectors vary more with pose than with identity. We propose a generative model that creates a one-to-many mapping from an idealized \"identity\" space to the observed data space. In identity space, the representation for each individual does not vary with pose. We model the measured feature vector as being generated by a pose-contingent linear transformation of the identity variable in the presence of Gaussian noise. We term this model \"tied\" factor analysis. The choice of linear transformation (factors) depends on the pose, but the loadings are constant (tied) for a given individual. We use the EM algorithm to estimate the linear transformations and the noise parameters from training data. We propose a probabilistic distance metric that allows a full posterior over possible matches to be established. We introduce a novel feature extraction process and investigate recognition performance by using the FERET, XM2VTS, and PIE databases. Recognition performance compares favorably with contemporary approaches.",
"Arguably the most important decision to be made when developing an object recognition algorithm is selecting the scene measurements or features on which to base the algorithm. In appearance-based object recognition, the features are chosen to be the pixel intensity values in an image of the object. These pixel intensities correspond directly to the radiance of light emitted from the object along certain rays in space. The set of all such radiance values over all possible rays is known as the plenoptic function or light-field. In this paper, we develop a theory of appearance-based object recognition from light-fields. This theory leads directly to an algorithm for face recognition across pose that uses as many images of the face as are available, from one upwards. All of the pixels, whichever image they come from, are treated equally and used to estimate the (eigen) light-field of the object. The eigen light-field is then used as the set of features on which to base recognition, analogously to how the pixel intensities are used in appearance-based face and object recognition.",
"The pose problem is one of the bottlenecks for face recognition. In this paper we propose a novel cross-pose face recognition method based on partial least squares (PLS). By training on the coupled face images of the same identities and across two different poses, PLS maximizes the squares of the intra-individual correlations. Therefore, it leads to improvements in recognizing faces across pose differences. The experimental results demonstrate the effectiveness of the proposed method."
]
} |
1507.08076 | 981523069 | The pose problem is one of the bottlenecks in automatic face recognition. We argue that one of the difficulties in this problem is the severe misalignment in face images or feature vectors with different poses. In this paper, we propose that this problem can be statistically solved or at least mitigated by maximizing the intra-subject across-pose correlations via canonical correlation analysis (CCA). In our method, based on the data set with coupled face images of the same identities and across two different poses, CCA learns simultaneously two linear transforms, each for one pose. In the transformed subspace, the intra-subject correlations between the different poses are maximized, which implies pose-invariance or pose-robustness is achieved. The experimental results show that our approach could considerably improve the recognition performance. And if further enhanced with holistic+local feature representation, the performance could be comparable to the state-of-the-art. | Besides building pose-robust statistical models, statistically transforming faces or features from one pose to another is another way to tackle the pose problem. @cite_16 transformed the frontal face model to non-frontal views to extend the gallery set. Lee and Kim @cite_12 constructed feature spaces for each pose using kernel principal component analysis, and then transformed non-frontal faces to frontal views through these feature spaces. Different from the foregoing two methods, which transform holistic faces, @cite_20 performed linear regression on local patches for virtual frontal view synthesis. @cite_5 embedded a bias-variance trade-off in cross-pose linear regression models by using ridge and lasso regression; this trade-off achieved considerable improvements in recognition performance. @cite_11 applied null space linear discriminant analysis in a face recognition approach dealing with both pose and illumination variations. | {
"cite_N": [
"@cite_5",
"@cite_16",
"@cite_12",
"@cite_20",
"@cite_11"
],
"mid": [
"",
"2097728881",
"2049494507",
"2097437748",
"2099230815"
],
"abstract": [
"",
"We address the pose mismatch problem which can occur in face verification systems that have only a single (frontal) face image available for training. In the framework of a Bayesian classifier based on mixtures of gaussians, the problem is tackled through extending each frontal face model with artificially synthesized models for non-frontal views. The synthesis methods are based on several implementations of maximum likelihood linear regression (MLLR), as well as standard multi-variate linear regression (LinReg). All synthesis techniques rely on prior information and learn how face models for the frontal view are related to face models for non-frontal views. The synthesis and extension approach is evaluated by applying it to two face verification systems: a holistic system (based on PCA-derived features) and a local feature system (based on DCT-derived features). Experiments on the FERET database suggest that for the holistic system, the LinReg-based technique is more suited than the MLLR-based techniques; for the local feature system, the results show that synthesis via a new MLLR implementation obtains better performance than synthesis based on traditional MLLR. The results further suggest that extending frontal models considerably reduces errors. It is also shown that the local feature system is less affected by view changes than the holistic system; this can be attributed to the parts based representation of the face, and, due to the classifier based on mixtures of gaussians, the lack of constraints on spatial relations between the face parts, allowing for deformations and movements of face areas.",
"Recognizing human faces is one of the most important areas of research in biometrics. However, drastic change of facial poses is a big challenge for its practical application. This paper proposes generating frontal view face image using linear transformation in feature space for face recognition. We extract features from a posed face image using the kernel PCA. Then, we transform the posed face image into its corresponding frontal face image using the transformation matrix predetermined by learning. Then, the generated frontal face image is identified by three different discrimination methods such as LDA, NDA, or GDA. Experimental results show that the recognition rate with the pose transformation outperforms that without pose transformation greatly.",
"The variation of facial appearance due to the viewpoint ( pose) degrades face recognition systems considerably, which is one of the bottlenecks in face recognition. One of the possible solutions is generating virtual frontal view from any given nonfrontal view to obtain a virtual gallery probe face. Following this idea, this paper proposes a simple, but efficient, novel locally linear regression (LLR) method, which generates the virtual frontal view from a given nonfrontal face image. We first justify the basic assumption of the paper that there exists an approximate linear mapping between a nonfrontal face image and its frontal counterpart. Then, by formulating the estimation of the linear mapping as a prediction problem, we present the regression-based solution, i.e., globally linear regression. To improve the prediction accuracy in the case of coarse alignment, LLR is further proposed. In LLR, we first perform dense sampling in the nonfrontal face image to obtain many overlapped local patches. Then, the linear regression technique is applied to each small patch for the prediction of its virtual frontal patch. Through the combination of all these patches, the virtual frontal view is generated. The experimental results on the CMU PIE database show distinct advantage of the proposed method over Eigen light-field method.",
"We propose a novel 2D image-based approach that can simultaneously handle illumination and pose variations to enhance face recognition rate. It is much simpler, requires much less computational effort than the methods based on 3D models, and provides a comparable or better recognition rate."
]
} |
1507.08076 | 981523069 | The pose problem is one of the bottlenecks in automatic face recognition. We argue that one of the difficulties in this problem is the severe misalignment in face images or feature vectors with different poses. In this paper, we propose that this problem can be statistically solved or at least mitigated by maximizing the intra-subject across-pose correlations via canonical correlation analysis (CCA). In our method, based on the data set with coupled face images of the same identities and across two different poses, CCA learns simultaneously two linear transforms, each for one pose. In the transformed subspace, the intra-subject correlations between the different poses are maximized, which implies pose-invariance or pose-robustness is achieved. The experimental results show that our approach could considerably improve the recognition performance. And if further enhanced with holistic+local feature representation, the performance could be comparable to the state-of-the-art. | Geometrical alignment can directly reduce the pose difference, but alignment itself cannot deal with occlusion. Statistical approaches can mitigate the occlusion problem to some extent, but their performance relies heavily on the training data. Thus, combining geometrical and statistical information is an alternative way to tackle the pose problem; we term this kind of method the hybrid approach. One way to integrate geometrical and statistical information is to combine local statistical models with coarse geometrical alignment. Kanade and Yamada @cite_22 proposed a probabilistic framework to build and combine local statistical models for recognizing faces under different viewpoints, in which the statistical models are built on local patches of face images.
Based on this framework, Liu and Chen @cite_26 used a simple 3D ellipsoid model to align patches across different poses, while @cite_29 aligned the patches by learning a 2D affine transform for each patch pair via a Lucas-Kanade-like optimization procedure @cite_2 . Unlike the above methods, which focus on matching local patches, Lucey and Chen @cite_7 extended Kanade and Yamada's work by building similar statistical models between holistic non-frontal faces and local patches on the frontal face. | {
"cite_N": [
"@cite_26",
"@cite_22",
"@cite_7",
"@cite_29",
"@cite_2"
],
"mid": [
"2131518659",
"2171193910",
"2140078189",
"2147945459",
"2118877769"
],
"abstract": [
"Researchers have been working on human face recognition for decades. Face recognition is hard due to different types of variations in face images, such as pose, illumination and expression, among which pose variation is the hardest one to deal with. To improve face recognition under pose variation, this paper presents a geometry assisted probabilistic approach. We approximate a human head with a 3D ellipsoid model, so that any face image is a 2D projection of such a 3D ellipsoid at a certain pose. In this approach, both training and test images are back projected to the surface of the 3D ellipsoid, according to their estimated poses, to form the texture maps. Thus the recognition can be conducted by comparing the texture maps instead of the original images, as done in traditional face recognition. In addition, we represent the texture map as an array of local patches, which enables us to train a probabilistic model for comparing corresponding patches. By conducting experiments on the CMU PIE database, we show that the proposed algorithm provides better performance than the existing algorithms.",
"Current automatic facial recognition systems are not robust against changes in illumination, pose, facial expression and occlusion. In this paper, we propose an algorithm based on a probabilistic approach for face recognition to address the problem of pose change by a probabilistic approach that takes into account the pose difference between probe and gallery images. By using a large facial image database called CMU PIE database, which contains images of the same set of people taken from many different angles, we have developed a probabilistic model of how facial features change as the pose changes. This model enables us to make our face recognition system more robust to the change of poses in the probe image. The experimental results show that this approach achieves a better recognition rate than conventional face recognition methods over a much larger range of pose. For example, when the gallery contains only images of a frontal face and the probe image varies its pose orientation, the recognition rate remains within a less than 10 difference until the probe pose begins to differ more than 45 degrees, whereas the recognition rate of a PCA-based method begins to drop at a difference as small as 10 degrees, and a representative commercial system at 30 degrees.",
"Most pose robust face verification algorithms, which employ 2D appearance, rely heavily on statistics gathered from offline databases containing ample facial appearance variation across many views. Due to the high dimensionality of the face images being employed, the validity of the assumptions employed in obtaining these statistics is essential for good performance. In this paper we assess three common approaches in the 2D appearance pose-mismatched face recognition literature. In our experiments we demonstrate where these approaches work and fail. As a result of this analysis, we additionally propose a new algorithm that attempts to learn the statistical dependency between gallery patches (i.e. local regions of pixels) and the whole appearance of the probe image. We demonstrate improved performance over a number of leading 2D appearance face recognition algorithms.",
"Variation due to viewpoint is one of the key challenges that stand in the way of a complete solution to the face recognition problem. It is easy to note that local regions of the face change differently in appearance as the viewpoint varies. Recently, patch-based approaches, such as those of Kanade and Yamada, have taken advantage of this effect resulting in improved viewpoint invariant face recognition. In this paper we propose a data-driven extension to their approach, in which we not only model how a face patch varies in appearance, but also how it deforms spatially as the viewpoint varies. We propose a novel alignment strategy which we refer to as 'stack flow' that discovers viewpoint induced spatial deformities undergone by a face at the patch level. One can then view the spatial deformation of a patch as the correspondence of that patch between two viewpoints. We present improved identification and verification results to demonstrate the utility of our technique.",
"Image registration finds a variety of applications in computer vision. Unfortunately, traditional image registration techniques tend to be costly. We present a new image registration technique that makes use of the spatial intensity gradient of the images to find a good match using a type of Newton-Raphson iteration. Our technique is faster because it examines far fewer potential matches between the images than existing techniques. Furthermore, this registration technique can be generalized to handle rotation, scaling and shearing. We show how our technique can be adapted for use in a stereo vision system."
]
} |
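The last cited abstract above describes the Lucas-Kanade registration idea: instead of scanning many candidate matches, drive a Newton-Raphson style update with the spatial intensity gradient. A minimal translation-only sketch of that scheme follows; the synthetic Gaussian images, nearest-neighbour warp and iteration limits are illustrative assumptions, not part of the dataset:

```python
import numpy as np

def lucas_kanade_translation(template, image, iters=50):
    """Estimate the translation p = (dx, dy) aligning `image` to `template`
    by Gauss-Newton iteration on the sum-of-squared-differences error,
    in the spirit of Lucas-Kanade."""
    p = np.zeros(2)
    ys, xs = np.mgrid[0:template.shape[0], 0:template.shape[1]]
    for _ in range(iters):
        # warp: sample the image at shifted coordinates (nearest neighbour)
        yy = np.clip(np.round(ys + p[1]).astype(int), 0, image.shape[0] - 1)
        xx = np.clip(np.round(xs + p[0]).astype(int), 0, image.shape[1] - 1)
        warped = image[yy, xx]
        error = template - warped
        # spatial intensity gradient of the warped image
        gy, gx = np.gradient(warped)
        J = np.stack([gx.ravel(), gy.ravel()], axis=1)  # Jacobian wrt (dx, dy)
        H = J.T @ J  # Gauss-Newton approximation of the Hessian
        dp = np.linalg.solve(H + 1e-8 * np.eye(2), J.T @ error.ravel())
        p += dp
        if np.linalg.norm(dp) < 1e-4:
            break
    return p
```

With sub-pixel interpolation in the warp this becomes the classical forward-additive algorithm; the nearest-neighbour sampling here keeps the sketch short at the cost of integer-level accuracy.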
1507.08286 | 998260344 | Deep learning methods have typically been trained on large datasets in which many training examples are available. However, many real-world product datasets have only a small number of images available for each product. We explore the use of deep learning methods for recognizing object instances when we have only a single training example per class. We show that feedforward neural networks outperform state-of-the-art methods for recognizing objects from novel viewpoints even when trained from just a single image per object. To further improve our performance on this task, we propose to take advantage of a supplementary dataset in which we observe a separate set of objects from multiple viewpoints. We introduce a new approach for training deep learning methods for instance recognition with limited training data, in which we use an auxiliary multi-view dataset to train our network to be robust to viewpoint changes. We find that this approach leads to a more robust classifier for recognizing objects from novel viewpoints, outperforming previous state-of-the-art approaches including keypoint-matching, template-based techniques, and sparse coding. | Instance recognition has traditionally been achieved using either keypoint-based methods @cite_18 @cite_8 or by matching local image patches @cite_11 @cite_57 @cite_56 . Keypoints can be filtered using different criteria @cite_18 and validated using RANSAC or Hough Voting to ensure geometric consistency @cite_33 . Although keypoint-based approaches have shown some success for image recognition @cite_10 , such methods are unreliable for recognizing untextured objects or non-planar objects when the viewpoint is changed by more than 25 degrees @cite_44 . | {
"cite_N": [
"@cite_18",
"@cite_33",
"@cite_8",
"@cite_56",
"@cite_57",
"@cite_44",
"@cite_10",
"@cite_11"
],
"mid": [
"2151103935",
"2085261163",
"",
"",
"",
"2054511520",
"2148114917",
"1607217798"
],
"abstract": [
"This paper presents a method for extracting distinctive invariant features from images that can be used to perform reliable matching between different views of an object or scene. The features are invariant to image scale and rotation, and are shown to provide robust matching across a substantial range of affine distortion, change in 3D viewpoint, addition of noise, and change in illumination. The features are highly distinctive, in the sense that a single feature can be correctly matched with high probability against a large database of features from many images. This paper also describes an approach to using these features for object recognition. The recognition proceeds by matching individual features to a database of features from known objects using a fast nearest-neighbor algorithm, followed by a Hough transform to identify clusters belonging to a single object, and finally performing verification through least-squares solution for consistent pose parameters. This approach to recognition can robustly identify objects among clutter and occlusion while achieving near real-time performance.",
"A new paradigm, Random Sample Consensus (RANSAC), for fitting a model to experimental data is introduced. RANSAC is capable of interpreting smoothing data containing a significant percentage of gross errors, and is thus ideally suited for applications in automated image analysis where interpretation is based on the data provided by error-prone feature detectors. A major portion of this paper describes the application of RANSAC to the Location Determination Problem (LDP): Given an image depicting a set of landmarks with known locations, determine that point in space from which the image was obtained. In response to a RANSAC requirement, new results are derived on the minimum number of landmarks needed to obtain a solution, and algorithms are presented for computing these minimum-landmark solutions in closed form. These results provide the basis for an automatic system that can solve the LDP under difficult viewing",
"",
"",
"",
"We explore the performance of a number of popular feature detectors and descriptors in matching 3D object features across viewpoints and lighting conditions. To this end we design a method, based on intersecting epipolar constraints, for providing ground truth correspondence automatically. These correspondences are based purely on geometric information, and do not rely on the choice of a specific feature appearance descriptor. We test detector-descriptor combinations on a database of 100 objects viewed from 144 calibrated viewpoints under three different lighting conditions. We find that the combination of Hessian-affine feature finder and SIFT features is most robust to viewpoint change. Harris-affine combined with SIFT and Hessian-affine combined with shape context descriptors were best respectively for lighting change and change in camera focal length. We also find that no detector-descriptor combination performs well with viewpoint changes of more than 25-30 degrees.",
"Many mobile visual search (MVS) systems transmit query data from a mobile device to a remote server and search a database hosted on the server. In this paper, we present a new architecture for searching a large database directly on a mobile device, which can provide numerous benefits for network-independent, low-latency, and privacy-protected image retrieval. A key challenge for on-device retrieval is storing a large database in the limited RAM of a mobile device. To address this challenge, we develop a new compact, discriminative image signature called the Residual Enhanced Visual Vector (REVV) that is optimized for sets of local features which are fast to extract on mobile devices. REVV outperforms existing compact database constructions in the MVS setting and attains similar retrieval accuracy in large-scale retrieval as a Vocabulary Tree that uses 25x more memory. We have utilized REVV to design and construct a mobile augmented reality system for accurate, large-scale landmark recognition. Fast on-device search with REVV enables our system to achieve latencies around 1s per query regardless of external network conditions. The compactness of REVV allows it to also function well as a low-bitrate signature that can be transmitted to or from a remote server for an efficient expansion of the local database search when required.",
"Methods based on local, viewpoint invariant features have proven capable of recognizing objects in spite of viewpoint changes, occlusion and clutter. However, these approaches fail when these factors are too strong, due to the limited repeatability and discriminative power of the features. As additional shortcomings, the objects need to be rigid and only their approximate location is found. We present a novel Object Recognition approach which overcomes these limitations. An initial set of feature correspondences is first generated. The method anchors on it and then gradually explores the surrounding area, trying to construct more and more matching features, increasingly farther from the initial ones. The resulting process covers the object with matches, and simultaneously separates the correct matches from the wrong ones. Hence, recognition and segmentation are achieved at the same time. Only very few correct initial matches suffice for reliable recognition. The experimental results demonstrate the stronger power of the presented method in dealing with extensive clutter, dominant occlusion, large scale and viewpoint changes. Moreover non-rigid deformations are explicitly taken into account, and the approximative contours of the object are produced. The approach can extend any viewpoint invariant feature extractor."
]
} |
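Of the reference abstracts in this row, the RANSAC one is the most algorithmic: repeatedly fit a model to a minimal random sample and keep the hypothesis with the largest consensus set. A minimal 2D line-fitting sketch of that loop follows; the iteration count, inlier tolerance and synthetic data are illustrative assumptions:

```python
import numpy as np

def ransac_line(points, n_iters=200, inlier_tol=0.05, rng=None):
    """Fit a 2D line n.x = c (with unit normal n) to points containing
    gross outliers, keeping the hypothesis with the most inliers."""
    rng = np.random.default_rng(rng)
    best_inliers, best_model = None, None
    for _ in range(n_iters):
        # minimal sample: two distinct points define a line hypothesis
        p1, p2 = points[rng.choice(len(points), size=2, replace=False)]
        d = p2 - p1
        length = np.hypot(*d)
        if length < 1e-12:
            continue
        n = np.array([-d[1], d[0]]) / length  # unit normal to the line
        c = n @ p1
        # consensus set: points within the perpendicular-distance tolerance
        residuals = np.abs(points @ n - c)
        inliers = residuals < inlier_tol
        if best_inliers is None or inliers.sum() > best_inliers.sum():
            best_inliers, best_model = inliers, (n, c)
    return best_model, best_inliers
```

In the keypoint-matching setting cited above, the same loop is run over minimal sets of feature correspondences and the model is a pose or homography rather than a line; a final least-squares refit on the consensus set is the usual last step.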
1507.08173 | 2161041254 | Mining useful clusters from high dimensional data have received significant attention of the computer vision and pattern recognition community in the recent years. Linear and nonlinear dimensionality reduction has played an important role to overcome the curse of dimensionality. However, often such methods are accompanied with three different problems: high computational complexity (usually associated with the nuclear norm minimization), nonconvexity (for matrix factorization methods), and susceptibility to gross corruptions in the data. In this paper, we propose a principal component analysis (PCA) based solution that overcomes these three issues and approximates a low-rank recovery method for high dimensional datasets. We target the low-rank recovery by enforcing two types of graph smoothness assumptions, one on the data samples and the other on the features by designing a convex optimization problem. The resulting algorithm is fast, efficient, and scalable for huge datasets with O(n log(n)) computational complexity in the number of data samples. It is also robust to gross corruptions in the dataset as well as to the model parameters. Clustering experiments on 7 benchmark datasets with different types of corruptions and background separation experiments on 3 video datasets show that our proposed model outperforms 10 state-of-the-art dimensionality reduction models. Our theoretical analysis proves that the proposed model is able to recover approximate low-rank representations with a bounded error for clusterable data. | A main drawback of PCA is its sensitivity to heavy-tailed noise due to the Frobenius norm in the objective function. Thus, a few strong corruptions can result in erratic principal components. Robust PCA (RPCA) proposed by @cite_29 overcomes this problem by recovering the clean low-rank representation @math from grossly corrupted @math by solving model 4 in Fig. . 
Here @math represents the sparse matrix containing the errors and @math denotes the nuclear norm of @math , the tightest convex relaxation of @math . | {
"cite_N": [
"@cite_29"
],
"mid": [
"2091741383"
],
"abstract": [
"Abstract Foreground detection is the first step in video surveillance system to detect moving objects. Recent research on subspace estimation by sparse representation and rank minimization represents a nice framework to separate moving objects from the background. Robust Principal Component Analysis (RPCA) solved via Principal Component Pursuit decomposes a data matrix A in two components such that A = L + S , where L is a low-rank matrix and S is a sparse noise matrix. The background sequence is then modeled by a low-rank subspace that can gradually change over time, while the moving foreground objects constitute the correlated sparse outliers. To date, many efforts have been made to develop Principal Component Pursuit (PCP) methods with reduced computational cost that perform visually well in foreground detection. However, no current algorithm seems to emerge and to be able to simultaneously address all the key challenges that accompany real-world videos. This is due, in part, to the absence of a rigorous quantitative evaluation with synthetic and realistic large-scale dataset with accurate ground truth providing a balanced coverage of the range of challenges present in the real world. In this context, this work aims to initiate a rigorous and comprehensive review of RPCA-PCP based methods for testing and ranking existing algorithms for foreground detection. For this, we first review the recent developments in the field of RPCA solved via Principal Component Pursuit. Furthermore, we investigate how these methods are solved and if incremental algorithms and real-time implementations can be achieved for foreground detection. Finally, experimental results on the Background Models Challenge (BMC) dataset which contains different synthetic and real datasets show the comparative performance of these recent methods."
]
} |
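The decomposition A = L + S described in the RPCA abstract above is typically computed by Principal Component Pursuit. A minimal inexact augmented-Lagrangian sketch follows; the parameter defaults and stopping rule follow common choices in the literature and are assumptions, not taken from the cited paper:

```python
import numpy as np

def soft(x, tau):
    """Entrywise soft-thresholding (proximal operator of the l1 norm)."""
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def rpca_pcp(A, n_iters=500, tol=1e-7):
    """Split A into low-rank L and sparse S by Principal Component Pursuit:
    min ||L||_* + lam * ||S||_1  subject to  A = L + S."""
    m, n = A.shape
    lam = 1.0 / np.sqrt(max(m, n))      # standard weight
    mu = 1.25 / np.linalg.norm(A, 2)    # initial penalty parameter
    rho, mu_bar = 1.5, mu * 1e7
    Y = A / max(np.linalg.norm(A, 2), np.abs(A).max() / lam)  # dual init
    S = np.zeros_like(A)
    for _ in range(n_iters):
        # L-update: singular value thresholding of A - S + Y/mu
        U, sig, Vt = np.linalg.svd(A - S + Y / mu, full_matrices=False)
        L = (U * soft(sig, 1.0 / mu)) @ Vt
        # S-update: entrywise shrinkage of the residual
        S = soft(A - L + Y / mu, lam / mu)
        Y = Y + mu * (A - L - S)
        mu = min(rho * mu, mu_bar)
        if np.linalg.norm(A - L - S) <= tol * np.linalg.norm(A):
            break
    return L, S
```

The full SVD in every iteration is exactly the cost that the related_work cells below point to as the scalability bottleneck of nuclear-norm methods.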
1507.08173 | 2161041254 | Mining useful clusters from high dimensional data have received significant attention of the computer vision and pattern recognition community in the recent years. Linear and nonlinear dimensionality reduction has played an important role to overcome the curse of dimensionality. However, often such methods are accompanied with three different problems: high computational complexity (usually associated with the nuclear norm minimization), nonconvexity (for matrix factorization methods), and susceptibility to gross corruptions in the data. In this paper, we propose a principal component analysis (PCA) based solution that overcomes these three issues and approximates a low-rank recovery method for high dimensional datasets. We target the low-rank recovery by enforcing two types of graph smoothness assumptions, one on the data samples and the other on the features by designing a convex optimization problem. The resulting algorithm is fast, efficient, and scalable for huge datasets with O(n log(n)) computational complexity in the number of data samples. It is also robust to gross corruptions in the dataset as well as to the model parameters. Clustering experiments on 7 benchmark datasets with different types of corruptions and background separation experiments on 3 video datasets show that our proposed model outperforms 10 state-of-the-art dimensionality reduction models. Our theoretical analysis proves that the proposed model is able to recover approximate low-rank representations with a bounded error for clusterable data. | Recently, many works related to low-rank or sparse representation recovery have been proposed to incorporate the data manifold information in the form of a discrete graph into the dimensionality reduction framework @cite_1 @cite_7 @cite_26 @cite_2 @cite_45 @cite_12 @cite_25 @cite_18 @cite_37 . 
In fact, for PCA, this can be considered as a method of exploiting the local smoothness information in order to improve clustering quality. The graph smoothness of the principal components @math using the graph Laplacian @math has been exploited in various works that explicitly learn @math and the basis @math . We refer to such models as . In this context, Graph Laplacian PCA (GLPCA) was proposed in @cite_1 (model 5 in Fig. ) and Manifold Regularized Matrix Factorization (MMF) in @cite_43 (model 6 in Fig. ). Note that the orthonormality constraint in this model is on @math , instead of the principal components @math . Later on, the authors of @cite_31 generalized robust PCA by incorporating the graph smoothness term (model 7 in Fig. ) directly on the low-rank matrix instead of the principal components. They call it Robust PCA on Graphs (RPCAG). | {
"cite_N": [
"@cite_18",
"@cite_26",
"@cite_37",
"@cite_7",
"@cite_1",
"@cite_43",
"@cite_45",
"@cite_2",
"@cite_31",
"@cite_25",
"@cite_12"
],
"mid": [
"2104144964",
"",
"1996838598",
"",
"2106210113",
"2028302281",
"",
"",
"2149532724",
"1994761691",
"2009226074"
],
"abstract": [
"Constructing an informative and discriminative graph plays an important role in various pattern recognition tasks such as clustering and classification. Among the existing graph-based learning models, low-rank representation (LRR) is a very competitive one, which has been extensively employed in spectral clustering and semi-supervised learning (SSL). In SSL, the graph is composed of both labeled and unlabeled samples, where the edge weights are calculated based on the LRR coefficients. However, most existing LRR-related approaches fail to consider the geometrical structure of data, which has been shown beneficial for discriminative tasks. In this paper, we propose an enhanced LRR via sparse manifold adaption, termed manifold low-rank representation (MLRR), to learn low-rank data representation. MLRR can explicitly take the data local manifold structure into consideration, which can be identified by the geometric sparsity idea; specifically, the local tangent space of each data point was sought by solving a sparse representation objective. Therefore, the graph to depict the relationship of data points can be built once the manifold information is obtained. We incorporate a regularizer into LRR to make the learned coefficients preserve the geometric constraints revealed in the data space. As a result, MLRR combines both the global information emphasized by low-rank property and the local information emphasized by the identified manifold structure. Extensive experimental results on semi-supervised classification tasks demonstrate that MLRR is an excellent method in comparison with several state-of-the-art graph construction approaches.",
"",
"This paper proposes a graph regularized low-rank sparse representation recovery (GLRSRR) method for sparse representation-based robust face recognition, in which both the training and test samples might be corrupted because of illumination variations, pose changes, and occlusions. On the one hand, GLRSRR imposes both the lowest-rank and sparsest constraints on the representation matrix of the training samples, which makes the recovered clean training samples discriminative while maintaining the global structure of data. Simultaneously, GLRSRR explicitly encodes the local structure information of data and the discriminative information of different classes by incorporating a graph regularization term, which further improves the discriminative ability of the recovered clean training samples for sparse representation. As a result, a test sample is compactly represented by more clean training samples from the correct class. On the other hand, since the error matrix obtained by GLRSRR can accurately and intuitively characterize the corruption and occlusion of face images, it can be used as an occlusion dictionary for sparse representation. This will bring more accurate representations of the corrupted test samples. The experimental results on several benchmark face image databases manifest the effectiveness and robustness of GLRSRR. A graph regularized low-rank sparse representation recovery method is proposed. The recovered clean training samples have more discriminative ability. The obtained errors can accurately depict corruption and occlusion of face images. A corrupted test sample is encoded by more training samples from the correct class. Our method improves the performance of sparse representation-based classification.",
"",
"Principal Component Analysis (PCA) is widely used to learn a low-dimensional representation. In many applications, both vector data X and graph data W are available. Laplacian embedding is widely used for embedding graph data. We propose a graph-Laplacian PCA (gLPCA) to learn a low dimensional representation of X that incorporates graph structures encoded in W. This model has several advantages: (1) It is a data representation model. (2) It has a compact closed-form solution and can be efficiently computed. (3) It is capable of removing corruptions. Extensive experiments on 8 datasets show promising results on image reconstruction and significant improvement on clustering and classification.",
"This paper proposes a new model of low-rank matrix factorization that incorporates manifold regularization to the matrix factorization. Superior to the graph-regularized nonnegative matrix factorization, this new regularization model has globally optimal and closed-form solutions. A direct algorithm (for data with small number of points) and an alternate iterative algorithm with inexact inner iteration (for large scale data) are proposed to solve the new model. A convergence analysis establishes the global convergence of the iterative algorithm. The efficiency and precision of the algorithm are demonstrated numerically through applications to six real-world datasets on clustering and classification. Performance comparison with existing algorithms shows the effectiveness of the proposed method for low-rank factorization in general.",
"",
"",
"Principal Component Analysis (PCA) is the most widely used tool for linear dimensionality reduction and clustering. Still it is highly sensitive to outliers and does not scale well with respect to the number of data samples. Robust PCA solves the first issue with a sparse penalty term. The second issue can be handled with the matrix factorization model, which is however non-convex. Besides, PCA based clustering can also be enhanced by using a graph of data similarity. In this article, we introduce a new model called 'Robust PCA on Graphs' which incorporates spectral graph regularization into the Robust PCA framework. Our proposed model benefits from 1) the robustness of principal components to occlusions and missing values, 2) enhanced low-rank recovery, 3) improved clustering property due to the graph smoothness assumption on the low-rank matrix, and 4) convexity of the resulting optimization problem. Extensive experiments on 8 benchmark, 3 video and 2 artificial datasets with corruptions clearly reveal that our model outperforms 10 other state-of-the-art models in its clustering and low-rank recovery tasks.",
"This paper presents a novel low-rank matrix factorization method, named MultiHMMF, which incorporates multiple Hypergraph manifold regularization to the low-rank matrix factorization. In order to effectively exploit high order information among the data samples, the Hypergraph is introduced to model the local structure of the intrinsic manifold. Specifically, multiple Hypergraph regularization terms are separately constructed to consider the local invariance; the optimal intrinsic manifold is constructed by linearly combining multiple Hypergraph manifolds. Then, the regularization term is incorporated into a truncated singular value decomposition framework resulting in a unified objective function so that matrix factorization is changed into an optimization problem. Alternating optimization is used to solve the optimization problem, with the result that the low dimensional representation of data space is obtained. The experimental results of image clustering demonstrate that the proposed method outperforms state-of-the-art data representation methods. We propose a novel low rank matrix factorization method. We incorporate multiple Hypergraph manifold regularization to the matrix factorization. We adopt alternating optimization to solve the optimization problem.",
"Manifold regularized sparse coding shows promising performance for various applications. The key issue that must be considered in the application is how to adaptively select the suitable graph hyper-parameters in manifold learning for the sparse coding task. Usually, cross validation is applied, but it does not necessarily scale up and easily leads to overfitting. In this article, multiple graph sparse coding (MGrSc) and multiple Hypergraph sparse coding (MHGrSc) for image representation are proposed. Inspired by the Ensemble Manifold Regularizer, we formulate multiple graph and multiple Hypergraph regularizers to guarantee the smoothness of sparse codes along the geodesics of a data manifold, which is characterized by fusing the multiple previously given graph Laplacians or Hypergraph Laplacians. Then, the proposed regularizers, respectively, are incorporated into the traditional sparse coding framework, which results in two unified objective functions of sparse coding. Alternating optimization is used to optimize the objective functions, and two novel manifold regularized sparse coding algorithms are presented. The proposed two sparse coding methods learn both the composite manifold and the sparse coding jointly, and it is fully automatic for learning the graph hyper-parameters in the manifold learning. Image clustering tests on real world datasets demonstrated that the proposed sparse coding methods are superior to the state-of-the-art methods."
]
} |
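Model 5 above (GLPCA) is attractive because it has a closed-form solution: for min ||X - U Q^T||_F^2 + alpha * tr(Q^T L Q) with Q^T Q = I, the embedding Q consists of the smallest eigenvectors of -X^T X + alpha * L, and U = X Q. A minimal sketch with an unnormalized kNN-graph Laplacian follows; the graph construction details (binary weights, symmetrization, k) are illustrative assumptions:

```python
import numpy as np

def knn_graph_laplacian(X, k=5):
    """Unnormalized Laplacian L = D - W of a symmetrized kNN graph
    built on the columns (samples) of X."""
    n = X.shape[1]
    D2 = ((X[:, :, None] - X[:, None, :]) ** 2).sum(axis=0)  # pairwise sq. dists
    W = np.zeros((n, n))
    for i in range(n):
        nbrs = np.argsort(D2[i])[1:k + 1]  # skip the point itself
        W[i, nbrs] = 1.0
    W = np.maximum(W, W.T)  # symmetrize
    return np.diag(W.sum(axis=1)) - W

def glpca(X, k_dim=2, alpha=1.0, k_nn=5):
    """Graph-Laplacian PCA sketch: Q = smallest-k_dim eigenvectors of
    -X^T X + alpha * L (one row per sample), and the basis U = X Q."""
    L = knn_graph_laplacian(X, k=k_nn)
    M = -X.T @ X + alpha * L
    vals, vecs = np.linalg.eigh(M)  # ascending eigenvalues
    Q = vecs[:, :k_dim]
    U = X @ Q
    return U, Q
```

With alpha = 0 this reduces to plain PCA (Q spans the top right-singular subspace of X), which is a convenient sanity check; increasing alpha trades reconstruction error for smoothness of the embedding over the graph.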
1507.08173 | 2161041254 | Mining useful clusters from high dimensional data have received significant attention of the computer vision and pattern recognition community in the recent years. Linear and nonlinear dimensionality reduction has played an important role to overcome the curse of dimensionality. However, often such methods are accompanied with three different problems: high computational complexity (usually associated with the nuclear norm minimization), nonconvexity (for matrix factorization methods), and susceptibility to gross corruptions in the data. In this paper, we propose a principal component analysis (PCA) based solution that overcomes these three issues and approximates a low-rank recovery method for high dimensional datasets. We target the low-rank recovery by enforcing two types of graph smoothness assumptions, one on the data samples and the other on the features by designing a convex optimization problem. The resulting algorithm is fast, efficient, and scalable for huge datasets with O(n log(n)) computational complexity in the number of data samples. It is also robust to gross corruptions in the dataset as well as to the model parameters. Clustering experiments on 7 benchmark datasets with different types of corruptions and background separation experiments on 3 video datasets show that our proposed model outperforms 10 state-of-the-art dimensionality reduction models. Our theoretical analysis proves that the proposed model is able to recover approximate low-rank representations with a bounded error for clusterable data. | Models 4 to 8 can be used for clustering in the low dimensional space. However, each of them comes with its own weaknesses. GLPCA @cite_1 and MMF @cite_43 improve upon the classical PCA by incorporating graph smoothness but they are non-convex and susceptible to data corruptions. Moreover, the rank @math of the subspace has to be specified upfront. 
RPCAG @cite_31 is convex and builds on the robustness property of RPCA @cite_29 by incorporating the graph smoothness directly on the low-rank matrix and improves both the clustering and low-rank recovery properties of PCA. However, it uses the nuclear norm relaxation that involves an expensive SVD step in every iteration of the algorithm. Although fast methods for the SVD have been proposed, based on randomization @cite_34 @cite_17 @cite_15 , Frobenius norm based representations @cite_35 @cite_13 or structured RPCA @cite_30 , its use in each iteration makes it hard to scale to large datasets. | {
"cite_N": [
"@cite_30",
"@cite_35",
"@cite_15",
"@cite_29",
"@cite_1",
"@cite_43",
"@cite_31",
"@cite_34",
"@cite_13",
"@cite_17"
],
"mid": [
"2163249222",
"2008138666",
"1950149046",
"2091741383",
"2106210113",
"2028302281",
"2149532724",
"2061844057",
"1507780841",
"1555730397"
],
"abstract": [
"A large number of problems arising in computer vision can be reduced to the problem of minimizing the nuclear norm of a matrix, subject to additional structural and sparsity constraints on its elements. Examples of relevant applications include, among others, robust tracking in the presence of outliers, manifold embedding, event detection, in-painting and tracklet matching across occlusion. In principle, these problems can be reduced to a convex semi-definite optimization form and solved using interior point methods. However, the poor scaling properties of these methods limit the use of this approach to relatively small sized problems. The main result of this paper shows that structured nuclear norm minimization problems can be efficiently solved by using an iterative Augmented Lagrangian Type (ALM) method that only requires performing at each iteration a combination of matrix thresholding and matrix inversion steps. As we illustrate in the paper with several examples, the proposed algorithm results in a substantial reduction of computational time and memory requirements when compared against interior-point methods, opening up the possibility of solving realistic, large sized problems.",
"Low-rank representation (LRR) intends to find the representation with lowest rank of a given data set, which can be formulated as a rank-minimisation problem. Since the rank operator is non-convex and discontinuous, most of the recent works use the nuclear norm as a convex relaxation. It is theoretically shown that, under some conditions, the Frobenius-norm-based optimisation problem has a unique solution that is also a solution of the original LRR optimisation problem. In other words, it is feasible to apply the Frobenius norm as a surrogate of the non-convex matrix rank function. This replacement will largely reduce the time costs for obtaining the lowest-rank solution. Experimental results show that the method (i.e. fast LRR (fLRR)) performs well in terms of accuracy and computation speed in image clustering and motion segmentation compared with nuclear-norm-based LRR algorithm.",
"The rank minimization problem can be boiled down to either a Nuclear Norm Minimization (NNM) or Weighted NNM (WNNM) problem. The problems related to NNM (or WNNM) can be solved iteratively by applying a closed-form proximal operator, called Singular Value Thresholding (SVT) (or Weighted SVT), but they suffer from the high computational cost of computing a Singular Value Decomposition (SVD) at each iteration. In this paper, we propose an accurate and fast approximation method for SVT, called fast randomized SVT (FRSVT), where we avoid direct computation of SVD. The key idea is to extract an approximate basis for the range of a matrix from its compressed matrix. Given the basis, we compute the partial singular values of the original matrix from a small factored matrix. While the basis approximation is the bottleneck, our method is already severalfold faster than thin SVD. By adopting a range propagation technique, we can further avoid one of the bottlenecks at each iteration. Our theoretical analysis provides a stepping stone between the approximation bound of SVD and its effect on NNM via SVT. Along with the analysis, our empirical results on both quantitative and qualitative studies show our approximation rarely harms the convergence behavior of the host algorithms. We apply it and validate the efficiency of our method on various vision problems, e.g. subspace clustering, weather artifact removal, simultaneous multi-image alignment and rectification.",
"Foreground detection is the first step in a video surveillance system to detect moving objects. Recent research on subspace estimation by sparse representation and rank minimization represents a nice framework to separate moving objects from the background. Robust Principal Component Analysis (RPCA) solved via Principal Component Pursuit decomposes a data matrix A into two components such that A = L + S , where L is a low-rank matrix and S is a sparse noise matrix. The background sequence is then modeled by a low-rank subspace that can gradually change over time, while the moving foreground objects constitute the correlated sparse outliers. To date, many efforts have been made to develop Principal Component Pursuit (PCP) methods with reduced computational cost that perform visually well in foreground detection. However, no current algorithm has emerged that is able to simultaneously address all the key challenges that accompany real-world videos. This is due, in part, to the absence of a rigorous quantitative evaluation with synthetic and realistic large-scale datasets with accurate ground truth providing a balanced coverage of the range of challenges present in the real world. In this context, this work aims to initiate a rigorous and comprehensive review of RPCA-PCP based methods for testing and ranking existing algorithms for foreground detection. For this, we first review the recent developments in the field of RPCA solved via Principal Component Pursuit. Furthermore, we investigate how these methods are solved and if incremental algorithms and real-time implementations can be achieved for foreground detection. Finally, experimental results on the Background Models Challenge (BMC) dataset which contains different synthetic and real datasets show the comparative performance of these recent methods.",
"Principal Component Analysis (PCA) is widely used to learn a low-dimensional representation. In many applications, both vector data X and graph data W are available. Laplacian embedding is widely used for embedding graph data. We propose a graph-Laplacian PCA (gLPCA) to learn a low dimensional representation of X that incorporates graph structures encoded in W. This model has several advantages: (1) It is a data representation model. (2) It has a compact closed-form solution and can be efficiently computed. (3) It is capable of removing corruptions. Extensive experiments on 8 datasets show promising results on image reconstruction and significant improvement on clustering and classification.",
"This paper proposes a new model of low-rank matrix factorization that incorporates manifold regularization into the matrix factorization. Superior to the graph-regularized nonnegative matrix factorization, this new regularization model has globally optimal and closed-form solutions. A direct algorithm (for data with a small number of points) and an alternate iterative algorithm with inexact inner iteration (for large scale data) are proposed to solve the new model. A convergence analysis establishes the global convergence of the iterative algorithm. The efficiency and precision of the algorithm are demonstrated numerically through applications to six real-world datasets on clustering and classification. Performance comparison with existing algorithms shows the effectiveness of the proposed method for low-rank factorization in general.",
"Principal Component Analysis (PCA) is the most widely used tool for linear dimensionality reduction and clustering. Still, it is highly sensitive to outliers and does not scale well with respect to the number of data samples. Robust PCA solves the first issue with a sparse penalty term. The second issue can be handled with the matrix factorization model, which is however non-convex. Besides, PCA based clustering can also be enhanced by using a graph of data similarity. In this article, we introduce a new model called 'Robust PCA on Graphs' which incorporates spectral graph regularization into the Robust PCA framework. Our proposed model benefits from 1) the robustness of principal components to occlusions and missing values, 2) enhanced low-rank recovery, 3) improved clustering property due to the graph smoothness assumption on the low-rank matrix, and 4) convexity of the resulting optimization problem. Extensive experiments on 8 benchmark, 3 video and 2 artificial datasets with corruptions clearly reveal that our model outperforms 10 other state-of-the-art models in its clustering and low-rank recovery tasks.",
"The development of randomized algorithms for numerical linear algebra, e.g. for computing approximate QR and SVD factorizations, has recently become an intense area of research. This paper studies one of the most frequently discussed algorithms in the literature for dimensionality reduction--specifically for approximating an input matrix with a low-rank element. We introduce a novel and rather intuitive analysis of the algorithm in [6], which allows us to derive sharp estimates and give new insights about its performance. This analysis yields theoretical guarantees about the approximation error and at the same time, ultimate limits of performance (lower bounds) showing that our upper bounds are tight. Numerical experiments complement our study and show the tightness of our predictions compared with empirical observations.",
"Many works have shown that Frobenius-norm-based representation (FNR) is competitive to sparse representation and nuclear-norm-based representation (NNR) in numerous tasks such as subspace clustering. Despite the success of FNR in experimental studies, little theoretical analysis has been provided to understand its working mechanism. In this brief, we fill this gap by building the theoretical connections between FNR and NNR. More specifically, we prove that: 1) when the dictionary can provide enough representative capacity, FNR is exactly NNR even though the data set contains the Gaussian noise, Laplacian noise, or sample-specified corruption and 2) otherwise, FNR and NNR are two solutions on the column space of the dictionary.",
"We analyze the parallel performance of randomized interpolative decomposition by decomposing low rank complex-valued Gaussian random matrices of about 100 GB. We chose a Cray XMT supercomputer as it provides an almost ideal PRAM model permitting quick investigation of parallel algorithms without obfuscation from hardware idiosyncrasies. We find that on non-square matrices performance scales almost linearly, with runtime about 100 times faster on 128 processors. We also verify that numerically discovered error bounds still hold on matrices two orders of magnitude larger than those previously tested."
]
} |
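The FRSVT abstract above is built around the singular value thresholding (SVT) proximal operator. As a rough illustration (this is our own sketch in NumPy, not code from any of the cited papers; the function name `svt` and the toy matrix are ours), SVT shrinks every singular value by a threshold:

```python
import numpy as np

def svt(M, tau):
    """Singular Value Thresholding: the proximal operator of the nuclear
    norm, argmin_X tau*||X||_* + 0.5*||X - M||_F^2. It computes an SVD
    and shrinks each singular value by tau, clipping at zero."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

# Shrinking the singular values 3 and 1 by tau = 1 leaves 2 and 0,
# so the result also has lower rank than the input.
X = svt(np.diag([3.0, 1.0]), 1.0)
```

The full SVD at every iteration is exactly the cost that the randomized approximations discussed above (FRSVT, randomized low-rank factorizations) try to avoid.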
1507.08384 | 2951318005 | During the last decade, the matroid secretary problem (MSP) became one of the most prominent classes of online selection problems. Partially linked to its numerous applications in mechanism design, substantial interest arose also in the study of nonlinear versions of MSP, with a focus on the submodular matroid secretary problem (SMSP). So far, O(1)-competitive algorithms have been obtained for SMSP over some basic matroid classes. This created some hope that, analogously to the matroid secretary conjecture, one may even obtain O(1)-competitive algorithms for SMSP over any matroid. However, up to now, most questions related to SMSP remained open, including whether SMSP may be substantially more difficult than MSP; and more generally, to what extent MSP and SMSP are related. Our goal is to address these points by presenting general black-box reductions from SMSP to MSP. In particular, we show that any O(1)-competitive algorithm for MSP, even restricted to a particular matroid class, can be transformed in a black-box way to an O(1)-competitive algorithm for SMSP over the same matroid class. This implies that the matroid secretary conjecture is equivalent to the same conjecture for SMSP. Hence, in this sense SMSP is not harder than MSP. Also, to find O(1)-competitive algorithms for SMSP over a particular matroid class, it suffices to consider MSP over the same matroid class. Using our reductions we obtain many first and improved O(1)-competitive algorithms for SMSP over various matroid classes by leveraging known algorithms for MSP. Moreover, our reductions imply an O(log log(rank))-competitive algorithm for SMSP, thus matching the currently best asymptotic algorithm for MSP, and substantially improving on the previously best O(log(rank))-competitive algorithm for SMSP.
| Progress has been made on the matroid secretary conjecture for variants of MSP which modify the assumptions on the order in which elements arrive and the way weights are assigned to elements. One simpler variant of MSP is obtained by assuming a random assignment of weights. Here, an adversary can only choose @math not necessarily distinct weights, and the weights are assigned to the elements uniformly at random. In this model, a @math -competitive algorithm can be obtained for any matroid @cite_19 . Additionally, a @math -competitive procedure can still be obtained in the random weight assignment model even if elements are assumed to arrive in an adversarial order @cite_30 @cite_19 . Hence, this variant is, in a sense, the opposite of the classical MSP, where weights are adversarial and the arrival order is random. Furthermore, a @math -competitive algorithm can be obtained in the so-called free order model. Here, the weight assignment is adversarial; however, the algorithm can choose the order in which elements arrive @cite_9 @cite_11 . Among the above-discussed variants, this is the only variant with adversarial weight assignments for which an @math -competitive algorithm is known. For more information on recent advances on MSP and its variants we refer to the survey @cite_20 . | {
"cite_N": [
"@cite_30",
"@cite_9",
"@cite_19",
"@cite_20",
"@cite_11"
],
"mid": [
"",
"1584886343",
"2085480884",
"1989006984",
"2950067834"
],
"abstract": [
"",
"In the classical prophet inequality, a gambler observes a sequence of stochastic rewards V1, ..., Vn and must decide, for each reward Vi, whether to keep it and stop the game or to forfeit the reward forever and reveal the next value Vi. The gambler's goal is to obtain a constant fraction of the expected reward that the optimal offline algorithm would get. Recently, prophet inequalities have been generalized to settings where the gambler can choose k items, and, more generally, where he can choose any independent set in a matroid. However, all the existing algorithms require the gambler to know the distribution from which the rewards V1, ..., Vn are drawn. The assumption that the gambler knows the distribution from which V1, ..., Vn are drawn is very strong. Instead, we work with the much simpler assumption that the gambler only knows a few samples from this distribution. We construct the first single-sample prophet inequalities for many settings of interest, whose guarantees all match the best possible asymptotically, even with full knowledge of the distribution. Specifically, we provide a novel single-sample algorithm when the gambler can choose any k elements whose analysis is based on random walks with limited correlation. In addition, we provide a black-box method for converting specific types of solutions to the related secretary problem to single-sample prophet inequalities, and apply it to several existing algorithms. Finally, we provide a constant-sample prophet inequality for constant-degree bipartite matchings. In addition, we apply these results to design the first posted-price and multi-dimensional auction mechanisms with limited information in settings with asymmetric bidders. Connections between prophet inequalities and posted-price mechanisms are already known, but applying the existing framework requires knowledge of the underlying distributions, as well as the so-called \"virtual values\" even when the underlying prophet inequalities do not. 
We therefore provide an extension of this framework that bypasses virtual values altogether, allowing our mechanisms to take full advantage of the limited information required by our new prophet inequalities.",
"In Martin Gardner's Mathematical Games column in the February 1960 issue of Scientific American, there appeared a simple problem that has come to be known today as the Secretary Problem, or the Marriage Problem. It has since been taken up and developed by many eminent probabilists and statisticians and has been extended and generalized in many different directions so that now one can say that it constitutes a \"field\" within mathematics-probability-optimization. The object of this article is partly historical (to give a fresh view of the origins of the problem, touching upon Cayley and Kepler), partly review of the field (listing the subfields of recent interest), partly serious (to answer the question posed in the title), and partly entertainment. The contents of this paper were first given as the Allen T. Craig lecture at the University of Iowa, 1988.",
"The matroid secretary problem was introduced by Babaioff, Immorlica, and Kleinberg in SODA 2007 as an online problem that was both mathematically interesting and had applications to online auctions. In this column I will introduce and motivate this problem, and give a survey of some of the exciting work that has been done on it over the past 6 years. While we have a much better understanding of matroid secretary now than we did in 2007, the main conjecture is still open: does there exist an O(1)-competitive algorithm?",
"The most well-known conjecture in the context of matroid secretary problems claims the existence of a constant-factor approximation applicable to any matroid. Whereas this conjecture remains open, modified forms of it were shown to be true when assuming that the assignment of weights to the secretaries is not adversarial but uniformly random (Soto [SODA 2011], Oveis Gharan and Vondrák [ESA 2011]). However, so far, there was no variant of the matroid secretary problem with adversarial weight assignment for which a constant-factor approximation was found. We address this point by presenting a 9-approximation for the free order model, a model suggested shortly after the introduction of the matroid secretary problem, and for which no constant-factor approximation was known so far. The free order model is a relaxed version of the original matroid secretary problem, with the only difference that one can choose the order in which secretaries are interviewed. Furthermore, we consider the classical matroid secretary problem for the special case of laminar matroids. Only recently, a constant-factor approximation has been found for this case, using a clever but rather involved method and analysis (Im and Wang, [SODA 2011]) that leads to a 16000/3-approximation. This is arguably the most involved special case of the matroid secretary problem for which a constant-factor approximation is known. We present a considerably simpler and stronger @math -approximation, based on reducing the problem to a matroid secretary problem on a partition matroid."
]
} |
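Several abstracts in this row trace back to the classical single-choice secretary problem. As a hedged illustration (the function name, simulation parameters, and toy values below are ours, not from the cited works), the classical 1/e stopping rule can be sketched and sanity-checked by simulation:

```python
import math
import random

def secretary_choice(values):
    """Classical 1/e rule: reject the first ~n/e candidates, then accept
    the first later candidate beating all of them (else take the last)."""
    n = len(values)
    cutoff = max(1, round(n / math.e))
    threshold = max(values[:cutoff])
    for v in values[cutoff:]:
        if v > threshold:
            return v
    return values[-1]  # forced to take the final candidate

# Empirical check: over uniformly random arrival orders, the best
# candidate should be selected in roughly a 1/e fraction of trials.
random.seed(0)
trials = 4000
wins = 0
for _ in range(trials):
    vals = random.sample(range(1000), 25)  # distinct weights, random order
    if secretary_choice(vals) == max(vals):
        wins += 1
rate = wins / trials
```

The matroid secretary problem generalizes this single-choice setting to selecting an independent set of a matroid, which is where the O(1)-competitiveness conjecture discussed above lives.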
1507.07677 | 1668283537 | The Stackelberg equilibrium solution concept describes optimal strategies to commit to: Player 1 (termed the leader) publicly commits to a strategy and Player 2 (termed the follower) plays a best response to this strategy (ties are broken in favor of the leader). We study Stackelberg equilibria in finite sequential games (or extensive-form games) and provide new exact algorithms, approximate algorithms, and hardness results for several classes of these sequential games. | There is a rich body of literature studying the problem of computing Stackelberg equilibria. The computational complexity of the problem is known for one-shot games @cite_4 , Bayesian games @cite_4 , and selected subclasses of extensive-form games @cite_15 and infinite stochastic games @cite_12 @cite_22 @cite_7 . Similarly, many practical algorithms are also known and typically based on solving multiple linear programs @cite_4 , or mixed-integer linear programs for Bayesian @cite_17 and extensive-form games @cite_11 . | {
"cite_N": [
"@cite_11",
"@cite_4",
"@cite_22",
"@cite_7",
"@cite_15",
"@cite_12",
"@cite_17"
],
"mid": [
"2194580937",
"1971309210",
"2592680824",
"2135382216",
"2055887369",
"137723021",
"2110504778"
],
"abstract": [
"Stackelberg equilibrium is a solution concept prescribing for a player an optimal strategy to commit to, assuming the opponent knows this commitment and plays the best response. Although this solution concept is a cornerstone of many security applications, the existing works typically do not consider situations where the players can observe and react to the actions of the opponent during the course of the game. We extend the existing algorithmic work to extensive-form games and introduce a novel algorithm for computing Stackelberg equilibria that exploits the compact sequence-form representation of strategies. Our algorithm reduces the size of the linear programs from exponential in the baseline approach to linear in the size of the game tree. Experimental evaluation on randomly generated games and a security-inspired search game demonstrates significant improvement in the scalability compared to the baseline approach.",
"In multiagent systems, strategic settings are often analyzed under the assumption that the players choose their strategies simultaneously. However, this model is not always realistic. In many settings, one player is able to commit to a strategy before the other player makes a decision. Such models are synonymously referred to as leadership, commitment, or Stackelberg models, and optimal play in such models is often significantly different from optimal play in the model where strategies are selected simultaneously. The recent surge in interest in computing game-theoretic solutions has so far ignored leadership models (with the exception of the interest in mechanism design, where the designer is implicitly in a leadership position). In this paper, we study how to compute optimal strategies to commit to under both commitment to pure strategies and commitment to mixed strategies, in both normal-form and Bayesian games. We give both positive results (efficient algorithms) and negative results (NP-hardness results).",
"This thesis studies various equilibrium concepts in the context of finite games of infinite duration and in the context of bi-matrix games. We considered the game settings where a special player - the leader - assigns the strategy profile to herself and to every other player in the game alike. The leader is given the leeway to benefit from deviation in a strategy profile whereas no other player is allowed to do so. These leader strategy profiles are asymmetric but stable as the stability of strategy profiles is considered w.r.t. all other players. The leader can further incentivise the strategy choices of other players by transferring a share of her own payoff to them, which results in incentive strategy profiles. Among this class of strategy profiles, an 'optimal' leader resp. incentive strategy profile would give maximal reward to the leader and is a leader resp. incentive equilibrium. We note that computing leader and incentive equilibria is no more expensive than computing Nash equilibria. For multi-player non-terminating games, their complexity is NP-complete in general and equals the complexity of computing two-player games when the number of players is kept fixed. We establish the use of memory and study the effect of increasing the memory size in leader strategy profiles in the context of discounted sum games. We discuss various follower behavioural models in bi-matrix games assuming both a friendly follower and an adversarial follower. This leads to friendly incentive equilibrium and secure incentive equilibrium for the resp. follower behaviour. While the construction of friendly incentive equilibria is tractable and straightforward, secure incentive equilibria need a constructive approach to establish their existence and tractability. 
Our overall observation is that the leader's return in an incentive equilibrium is always higher than (or equal to) her return in a leader equilibrium, which in turn provides a higher or equal leader return than a Nash equilibrium. Optimal strategy profiles assigned this way therefore prove beneficial for the leader.",
"In this paper, we establish the existence of optimal bounded memory strategy profiles in multi-player discounted sum games. We introduce a non-deterministic approach to compute optimal strategy profiles with bounded memory. Our approach can be used to obtain optimal rewards in a setting where a powerful player selects the strategies of all players for Nash and leader equilibria, where in leader equilibria the Nash condition is waived for the strategy of this powerful player. The resulting strategy profiles are optimal for this player among all strategy profiles that respect the given memory bound, and the related decision problem is NP-complete. We also provide simple examples, which show that having more memory will improve the optimal strategy profile, and that sufficient memory to obtain optimal strategy profiles cannot be inferred from the structure of the game.",
"Computing optimal strategies to commit to in general normal-form or Bayesian games is a topic that has recently been gaining attention, in part due to the application of such algorithms in various security and law enforcement scenarios. In this paper, we extend this line of work to the more general case of commitment in extensive-form games. We show that in some cases, the optimal strategy can be computed in polynomial time; in others, computing it is NP-hard.",
"Significant progress has been made recently in the following two lines of research in the intersection of AI and game theory: (1) the computation of optimal strategies to commit to (Stackelberg strategies), and (2) the computation of correlated equilibria of stochastic games. In this paper, we unite these two lines of research by studying the computation of Stackelberg strategies in stochastic games. We provide theoretical results on the value of being able to commit and the value of being able to correlate, as well as complexity results about computing Stackelberg strategies in stochastic games. We then modify the QPACE algorithm ( 2011) to compute Stackelberg strategies, and provide experimental results.",
"In a class of games known as Stackelberg games, one agent (the leader) must commit to a strategy that can be observed by the other agent (the follower or adversary) before the adversary chooses its own strategy. We consider Bayesian Stackelberg games, in which the leader is uncertain about the types of adversary it may face. Such games are important in security domains, where, for example, a security agent (leader) must commit to a strategy of patrolling certain areas, and a robber (follower) has a chance to observe this strategy over time before choosing its own strategy of where to attack. This paper presents an efficient exact algorithm for finding the optimal strategy for the leader to commit to in these games. This algorithm, DOBSS, is based on a novel and compact mixed-integer linear programming formulation. Compared to the most efficient algorithm known previously for this problem, DOBSS is not only faster, but also leads to higher quality solutions, and does not suffer from problems of infeasibility that were faced by this previous algorithm. Note that DOBSS is at the heart of the ARMOR system that is currently being tested for security scheduling at the Los Angeles International Airport."
]
} |
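The "multiple linear programs" approach of @cite_4 referenced in the related work above can be sketched as follows. This is an illustrative reconstruction, not the authors' code: it assumes SciPy's `linprog` is available, and the toy payoff matrices are our own. For each follower pure strategy j, one LP maximizes the leader's utility over leader mixed strategies for which j is a best response; the best of the n LP values is the optimal commitment:

```python
import numpy as np
from scipy.optimize import linprog

def stackelberg_mixed_commitment(A, B):
    """Multiple-LPs method for an optimal mixed strategy to commit to.
    A, B: leader/follower payoff matrices (rows = leader actions,
    columns = follower actions). Returns (leader value, leader mix)."""
    m, n = A.shape
    best = (-np.inf, None)
    for j in range(n):
        c = -A[:, j]                    # linprog minimizes, so negate
        G = (B - B[:, [j]]).T           # rows: x @ (B[:,k] - B[:,j]) <= 0
        res = linprog(c, A_ub=G, b_ub=np.zeros(n),
                      A_eq=np.ones((1, m)), b_eq=[1.0],
                      bounds=[(0, 1)] * m)
        if res.success and -res.fun > best[0]:
            best = (-res.fun, res.x)
    return best

A = np.array([[2.0, 4.0], [1.0, 3.0]])  # leader payoffs (toy example)
B = np.array([[1.0, 0.0], [0.0, 1.0]])  # follower payoffs
value, x = stackelberg_mixed_commitment(A, B)
```

In this toy game the leader commits to the mix (0.5, 0.5), making the follower's second action a (tie-broken) best response, for a commitment value of 3.5.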
1507.07677 | 1668283537 | The Stackelberg equilibrium solution concept describes optimal strategies to commit to: Player 1 (termed the leader) publicly commits to a strategy and Player 2 (termed the follower) plays a best response to this strategy (ties are broken in favor of the leader). We study Stackelberg equilibria in finite sequential games (or extensive-form games) and provide new exact algorithms, approximate algorithms, and hardness results for several classes of these sequential games. | For one-shot games, the problem of computing a Stackelberg equilibrium is polynomial @cite_4 , in contrast to the PPAD-completeness of a Nash equilibrium @cite_1 @cite_16 . The situation changes in extensive-form games, where Letchford and Conitzer showed @cite_15 that for many cases the problem is NP-hard, while it still remains PPAD-complete for a Nash equilibrium @cite_23 . More specifically, computing Stackelberg equilibria is polynomial only for two cases: games with perfect information and no chance nodes on DAGs, where the leader commits to a pure strategy; and games with perfect information and no chance nodes on trees. Introducing chance or imperfect information leads to NP-hardness. However, several cases were left unexplored by the existing work, namely extensive-form games with perfect information and concurrent moves. We address this subclass in this work. | {
"cite_N": [
"@cite_4",
"@cite_1",
"@cite_23",
"@cite_15",
"@cite_16"
],
"mid": [
"1971309210",
"2140790422",
"2096651633",
"2055887369",
"2057913812"
],
"abstract": [
"In multiagent systems, strategic settings are often analyzed under the assumption that the players choose their strategies simultaneously. However, this model is not always realistic. In many settings, one player is able to commit to a strategy before the other player makes a decision. Such models are synonymously referred to as leadership, commitment, or Stackelberg models, and optimal play in such models is often significantly different from optimal play in the model where strategies are selected simultaneously. The recent surge in interest in computing game-theoretic solutions has so far ignored leadership models (with the exception of the interest in mechanism design, where the designer is implicitly in a leadership position). In this paper, we study how to compute optimal strategies to commit to under both commitment to pure strategies and commitment to mixed strategies, in both normal-form and Bayesian games. We give both positive results (efficient algorithms) and negative results (NP-hardness results).",
"We resolve the question of the complexity of Nash equilibrium by showing that the problem of computing a Nash equilibrium in a game with 4 or more players is complete for the complexity class PPAD. Our proof uses ideas from the recently-established equivalence between polynomial time solvability of normal form games and graphical games, establishing that these kinds of games can simulate a PPAD-complete class of Brouwer functions.",
"A recent sequence of results established that computing Nash equilibria in normal form games is a PPAD-complete problem even in the case of two players [11,6,4]. By extending these techniques we prove a general theorem, showing that, for a far more general class of families of succinctly representable multiplayer games, the Nash equilibrium problem can also be reduced to the two-player case. In view of empirically successful algorithms available for this problem, this is in essence a positive result — even though, due to the complexity of the reductions, it is of no immediate practical significance. We further extend this conclusion to extensive form games and network congestion games, two classes which do not fall into the same succinct representation framework, and for which no positive algorithmic result had been known.",
"Computing optimal strategies to commit to in general normal-form or Bayesian games is a topic that has recently been gaining attention, in part due to the application of such algorithms in various security and law enforcement scenarios. In this paper, we extend this line of work to the more general case of commitment in extensive-form games. We show that in some cases, the optimal strategy can be computed in polynomial time; in others, computing it is NP-hard.",
"We prove that Bimatrix, the problem of finding a Nash equilibrium in a two-player game, is complete for the complexity class PPAD (Polynomial Parity Argument, Directed version) introduced by Papadimitriou in 1991. Our result, building upon the work of [2006a] on the complexity of four-player Nash equilibria, settles a long standing open problem in algorithmic game theory. It also serves as a starting point for a series of results concerning the complexity of two-player Nash equilibria. In particular, we prove the following theorems: —Bimatrix does not have a fully polynomial-time approximation scheme unless every problem in PPAD is solvable in polynomial time. —The smoothed complexity of the classic Lemke-Howson algorithm and, in fact, of any algorithm for Bimatrix is not polynomial unless every problem in PPAD is solvable in randomized polynomial time. Our results also have a complexity implication in mathematical economics: —Arrow-Debreu market equilibria are PPAD-hard to compute."
]
} |
1507.07677 | 1668283537 | The Stackelberg equilibrium solution concept describes optimal strategies to commit to: Player 1 (termed the leader) publicly commits to a strategy and Player 2 (termed the follower) plays a best response to this strategy (ties are broken in favor of the leader). We study Stackelberg equilibria in finite sequential games (or extensive-form games) and provide new exact algorithms, approximate algorithms, and hardness results for several classes of these sequential games. | The computational complexity can also change when the leader commits to correlated strategies. This extension of the Stackelberg notion to correlated strategies appeared in several works @cite_2 @cite_12 @cite_19 . Conitzer and Korzhyk @cite_2 analyzed correlated strategies in one-shot games, providing a single linear program for their computation. Letchford @cite_12 showed that the problem of finding optimal correlated strategies to commit to is NP-hard in infinite discounted stochastic games (more precisely, that work assumes that the correlated strategies can use a finite history). Xu @cite_19 focused on using correlated strategies in a real-world security-based scenario. | {
"cite_N": [
"@cite_19",
"@cite_12",
"@cite_2"
],
"mid": [
"2194892124",
"137723021",
"2198558401"
],
"abstract": [
"Stackelberg security games have been widely deployed to protect real-world assets. The main solution concept there is the Strong Stackelberg Equilibrium (SSE), which optimizes the defender's random allocation of limited security resources. However, solely deploying the SSE mixed strategy has limitations. In the extreme case, there are security games in which the defender is able to defend all the assets \"almost perfectly\" at the SSE, but she still sustains significant loss. In this paper, we propose an approach for improving the defender's utility in such scenarios. Perhaps surprisingly, our approach is to strategically reveal to the attacker information about the sampled pure strategy. Specifically, we propose a two-stage security game model, where in the first stage the defender allocates resources and the attacker selects a target to attack, and in the second stage the defender strategically reveals local information about that target, potentially deterring the attacker's attack plan. We then study how the defender can play optimally in both stages. We show, theoretically and experimentally, that the two-stage security game model allows the defender to achieve strictly better utility than SSE.",
"Significant progress has been made recently in the following two lines of research in the intersection of AI and game theory: (1) the computation of optimal strategies to commit to (Stackelberg strategies), and (2) the computation of correlated equilibria of stochastic games. In this paper, we unite these two lines of research by studying the computation of Stackelberg strategies in stochastic games. We provide theoretical results on the value of being able to commit and the value of being able to correlate, as well as complexity results about computing Stackelberg strategies in stochastic games. We then modify the QPACE algorithm ( 2011) to compute Stackelberg strategies, and provide experimental results.",
"The standard approach to computing an optimal mixed strategy to commit to is based on solving a set of linear programs, one for each of the follower's pure strategies. We show that these linear programs can be naturally merged into a single linear program; that this linear program can be interpreted as a formulation for the optimal correlated strategy to commit to, giving an easy proof of a result by von Stengel and Zamir that the leader's utility is at least the utility she gets in any correlated equilibrium of the simultaneous-move game; and that this linear program can be extended to compute optimal correlated strategies to commit to in games of three or more players. (Unlike in two-player games, in games of three or more players, the notions of optimal mixed and correlated strategies to commit to are truly distinct.) We give examples, and provide experimental results that indicate that for 50 × 50 games, this approach is usually significantly faster than the multiple-LPs approach."
]
} |
1507.07677 | 1668283537 | The Stackelberg equilibrium solution concept describes optimal strategies to commit to: Player 1 (termed the leader) publicly commits to a strategy and Player 2 (termed the follower) plays a best response to this strategy (ties are broken in favor of the leader). We study Stackelberg equilibria in finite sequential games (or extensive-form games) and provide new exact algorithms, approximate algorithms, and hardness results for several classes of these sequential games. | The detailed analysis of the impact when the leader can commit to correlated strategies has, however, not been investigated sufficiently in the existing work. We address this extension and study the complexity for multiple subclasses of extensive-form games. Our results show that for many cases the problem of computing Stackelberg equilibria in correlated strategies is polynomial compared to the NP-hardness in behavioral strategies. Finally, these theoretical results have also practical algorithmic implications. An algorithm that computes a Stackelberg equilibrium in correlated strategies can be used to compute a Stackelberg equilibrium in behavioral strategies allowing a significant speed-up in computation time @cite_0 . | {
"cite_N": [
"@cite_0"
],
"mid": [
"2480793474"
],
"abstract": [
"Strong Stackelberg Equilibrium (SSE) is a fundamental solution concept in game theory in which one player commits to a strategy, while the other player observes this commitment and plays a best response. We present a new algorithm for computing SSE for two-player extensive-form general-sum games with imperfect information (EFGs) where computing SSE is an NP-hard problem. Our algorithm is based on a correlated version of SSE, known as Stackelberg Extensive-Form Correlated Equilibrium (SEFCE). Our contribution is therefore twofold: (1) we give the first linear program for computing SEFCE in EFGs without chance, (2) we repeatedly solve and modify this linear program in a systematic search until we arrive to SSE. Our new algorithm outperforms the best previous algorithms by several orders of magnitude."
]
} |
1507.07646 | 2951438410 | Understanding how an animal can deform and articulate is essential for a realistic modification of its 3D model. In this paper, we show that such information can be learned from user-clicked 2D images and a template 3D model of the target animal. We present a volumetric deformation framework that produces a set of new 3D models by deforming a template 3D model according to a set of user-clicked images. Our framework is based on a novel locally-bounded deformation energy, where every local region has its own stiffness value that bounds how much distortion is allowed at that location. We jointly learn the local stiffness bounds as we deform the template 3D mesh to match each user-clicked image. We show that this seemingly complex task can be solved as a sequence of convex optimization problems. We demonstrate the effectiveness of our approach on cats and horses, which are highly deformable and articulated animals. Our framework produces new 3D models of animals that are significantly more plausible than methods without learned stiffness. | Works that modify a template 3D model to fit images can be roughly divided into two categories: those that are class-agnostic, and those that have a class-specific deformation prior learned from data or provided by users. Methods that do not use any class-specific prior make use of strong image cues such as silhouettes @cite_43 @cite_50 or contour drawings @cite_19 . These approaches focus on fitting a single 3D model into a single image, while we focus on learning a class-specific prior as we modify the template 3D model to fit multiple images. Recently, introduced an exciting new photo editing tool that allows users to perform 3D manipulation by aligning 3D stock models to 2D images @cite_38 . Our approach complements this application, which is only demonstrated for rigid objects. | {
"cite_N": [
"@cite_19",
"@cite_43",
"@cite_50",
"@cite_38"
],
"mid": [
"",
"2015676227",
"2083951796",
"2157953764"
],
"abstract": [
"",
"Reconstructing the shape of a deformable object from a single image is a challenging problem, even when a 3D template shape is available. Many different methods have been proposed for this problem, however what they have in common is that they are only able to reconstruct the part of the surface which is visible in a reference image. In contrast, we are interested in recovering the full shape of a deformable 3D object. We introduce a new method designed to reconstruct closed surfaces. This type of surface is better suited for representing objects with volume. Our method relies on recent advances in silhouette Based reconstruction methods to obtain the template from a reference image. This template is then deformed in order to fit the measurements of a new input image. We combine an inextensibility prior on the deformation with powerful image measurements, in the form of silhouette and area constraints, to make our method less reliant on point correspondences. We show reconstruction results for different object classes, such as animals or hands, that have not been previously attempted with existing template methods.",
"In this paper, we propose an image driven shape deformation approach for stylizing a 3D mesh using styles learned from existing 2D illustrations. Our approach models a 2D illustration as a planar mesh and represents the shape styles with four components: the object contour, the context curves, user-specified features and local shape details. After the correspondence between the input model and the 2D illustration is established, shape stylization is formulated as a style-constrained differential mesh editing problem. A distinguishing feature of our approach is that it allows users to directly transfer styles from hand-drawn 2D illustrations with individual perception and cognition, which are difficult to identify and create with 3D modeling and editing approaches. We present a sequence of challenging examples including unrealistic and exaggerated paintings to illustrate the effectiveness of our approach.",
"Photo-editing software restricts the control of objects in a photograph to the 2D image plane. We present a method that enables users to perform the full range of 3D manipulations, including scaling, rotation, translation, and nonrigid deformations, to an object in a photograph. As 3D manipulations often reveal parts of the object that are hidden in the original photograph, our approach uses publicly available 3D models to guide the completion of the geometry and appearance of the revealed areas of the object. The completion process leverages the structure and symmetry in the stock 3D model to factor out the effects of illumination, and to complete the appearance of the object. We demonstrate our system by producing object manipulations that would be impossible in traditional 2D photo-editing programs, such as turning a car over, making a paper-crane flap its wings, or manipulating airplanes in a historical photograph to change its story."
]
} |
1507.07646 | 2951438410 | Understanding how an animal can deform and articulate is essential for a realistic modification of its 3D model. In this paper, we show that such information can be learned from user-clicked 2D images and a template 3D model of the target animal. We present a volumetric deformation framework that produces a set of new 3D models by deforming a template 3D model according to a set of user-clicked images. Our framework is based on a novel locally-bounded deformation energy, where every local region has its own stiffness value that bounds how much distortion is allowed at that location. We jointly learn the local stiffness bounds as we deform the template 3D mesh to match each user-clicked image. We show that this seemingly complex task can be solved as a sequence of convex optimization problems. We demonstrate the effectiveness of our approach on cats and horses, which are highly deformable and articulated animals. Our framework produces new 3D models of animals that are significantly more plausible than methods without learned stiffness. | More closely related to our approach are works that make use of prior knowledge on how the 3D model can change its shape. Many works assume a prior is provided by users or artists in the form of kinematic skeletons @cite_37 @cite_2 @cite_47 @cite_46 @cite_21 @cite_40 or painted stiffness @cite_11 . Since obtaining such priors from users is expensive, many methods learn deformation models automatically from data @cite_35 @cite_5 @cite_0 @cite_29 @cite_48 @cite_8 @cite_39 . @cite_5 use a set of registered 3D range scans of human bodies in a variety of configurations to construct skeletons using graphical models. Blanz and Vetter @cite_35 learn a morphable model of human faces from 3D scans, where a 3D face is described by a linear combination of basis faces. Given a rough initial alignment, they fit the learned morphable models to images by restricting the model to the space spanned by the learned basis. 
Similarly @cite_0 @cite_10 learn a statistical model of human bodies from a set of 3D scans. @cite_11 learn the material stiffness of animal meshes by analyzing a set of vertex-aligned 3D meshes in various poses. | {
"cite_N": [
"@cite_35",
"@cite_37",
"@cite_47",
"@cite_8",
"@cite_48",
"@cite_29",
"@cite_21",
"@cite_39",
"@cite_0",
"@cite_40",
"@cite_2",
"@cite_5",
"@cite_46",
"@cite_10",
"@cite_11"
],
"mid": [
"2237250383",
"2545173102",
"",
"2152005648",
"2034943672",
"1522277130",
"1977039804",
"",
"1989191365",
"",
"1943191679",
"2952279711",
"2150457612",
"1993846356",
"1988242451"
],
"abstract": [
"In this paper, a new technique for modeling textured 3D faces is introduced. 3D faces can either be generated automatically from one or more photographs, or modeled directly through an intuitive user interface. Users are assisted in two key problems of computer aided face modeling. First, new face images or new 3D face models can be registered automatically by computing dense one-to-one correspondence to an internal face model. Second, the approach regulates the naturalness of modeled faces avoiding faces with an “unlikely” appearance. Starting from an example set of 3D face models, we derive a morphable face model by transforming the shape and texture of the examples into a vector space representation. New faces and expressions can be modeled by forming linear combinations of the prototypes. Shape and texture constraints derived from the statistics of our example faces are used to guide manual modeling or automated matching algorithms. We show 3D face reconstructions from single images and their applications for photo-realistic image manipulations. We also demonstrate face manipulations according to complex parameters such as gender, fullness of a face or its distinctiveness.",
"We describe a solution to the challenging problem of estimating human body shape from a single photograph or painting. Our approach computes shape and pose parameters of a 3D human body model directly from monocular image cues and advances the state of the art in several directions. First, given a user-supplied estimate of the subject's height and a few clicked points on the body we estimate an initial 3D articulated body pose and shape. Second, using this initial guess we generate a tri-map of regions inside, outside and on the boundary of the human, which is used to segment the image using graph cuts. Third, we learn a low-dimensional linear model of human shape in which variations due to height are concentrated along a single dimension, enabling height-constrained estimation of body shape. Fourth, we formulate the problem of parametric human shape from shading. We estimate the body pose, shape and reflectance as well as the scene lighting that produces a synthesized body that robustly matches the image evidence. Quantitative experiments demonstrate how smooth shading provides powerful constraints on human shape. We further demonstrate a novel application in which we extract 3D human models from archival photographs and paintings.",
"",
"We introduce an example-based rigging approach to automatically generate linear blend skinning models with skelet al structure. Based on a set of example poses, our approach can output its skeleton, joint positions, linear blend skinning weights, and corresponding bone transformations. The output can be directly used to set up skeleton-based animation in various 3D modeling and animation software as well as game engines. Specifically, we formulate the solving of a linear blend skinning model with a skeleton as an optimization with joint constraints and weight smoothness regularization, and solve it using an iterative rigging algorithm that (i) alternatively updates skinning weights, joint locations, and bone transformations, and (ii) automatically prunes redundant bones that can be generated by an over-estimated bone initialization. Due to the automatic redundant bone pruning, our approach is more robust than existing example-based rigging approaches. Furthermore, in terms of rigging accuracy, even with a single set of parameters, our approach can soundly outperform state of the art methods on various types of experimental datasets including humans, quadrupled animals, and highly deformable models.",
"Recently, it has become increasingly popular to represent animations not by means of a classical skeleton-based model, but in the form of deforming mesh sequences. The reason for this new trend is that novel mesh deformation methods as well as new surface based scene capture techniques offer a great level of flexibility during animation creation. Unfortunately, the resulting scene representation is less compact than skelet al ones and there is not yet a rich toolbox available which enables easy post-processing and modification of mesh animations. To bridge this gap between the mesh-based and the skelet al paradigm, we propose a new method that automatically extracts a plausible kinematic skeleton, skelet al motion parameters, as well as surface skinning weights from arbitrary mesh animations. By this means, deforming mesh sequences can be fully-automatically transformed into fullyrigged virtual subjects. The original input can then be quickly rendered based on the new compact bone and skin representation, and it can be easily modified using the full repertoire of already existing animation tools.",
"In this paper we propose a probabilistic framework that models shape variations and infers dense and detailed 3D shapes from a single silhouette. We model two types of shape variations, the object phenotype variation and its pose variation using two independent Gaussian Process Latent Variable Models (GPLVMs) respectively. The proposed shape variation models are learnt from 3D samples without prior knowledge about object class, e.g. object parts and skeletons, and are combined to fully span the 3D shape space. A novel probabilistic inference algorithm for 3D shape estimation is proposed by maximum likelihood estimates of the GPLVM latent variables and the camera parameters that best fit generated 3D shapes to given silhouettes. The proposed inference involves a small number of latent variables and it is computationally efficient. Experiments on both human body and shark data demonstrate the efficacy of our new approach.",
"In this paper we present a novel real-time algorithm for simultaneous pose and shape estimation for articulated objects, such as human beings and animals. The key of our pose estimation component is to embed the articulated deformation model with exponential-maps-based parametrization into a Gaussian Mixture Model. Benefiting from the probabilistic measurement model, our algorithm requires no explicit point correspondences as opposed to most existing methods. Consequently, our approach is less sensitive to local minimum and well handles fast and complex motions. Extensive evaluations on publicly available datasets demonstrate that our method outperforms most state-of-art pose estimation algorithms with large margin, especially in the case of challenging motions. Moreover, our novel shape adaptation algorithm based on the same probabilistic model automatically captures the shape of the subjects during the dynamic pose estimation process. Experiments show that our shape estimation method achieves comparable accuracy with state of the arts, yet requires neither parametric model nor extra calibration procedure.",
"",
"We introduce the SCAPE method (Shape Completion and Animation for PEople)---a data-driven method for building a human shape model that spans variation in both subject shape and pose. The method is based on a representation that incorporates both articulated and non-rigid deformations. We learn a pose deformation model that derives the non-rigid surface deformation as a function of the pose of the articulated skeleton. We also learn a separate model of variation based on body shape. Our two models can be combined to produce 3D surface models with realistic muscle deformation for different people in different poses, when neither appear in the training set. We show how the model can be used for shape completion --- generating a complete surface mesh given a limited set of markers specifying the target shape. We present applications of shape completion to partial view completion and motion capture animation. In particular, our method is capable of constructing a high-quality animated surface model of a moving person, with realistic muscle deformation, using just a single static scan and a marker motion capture sequence of the person.",
"",
"Estimating 3D human pose from 2D joint locations is central to the analysis of people in images and video. To address the fact that the problem is inherently ill posed, many methods impose a prior over human poses. Unfortunately these priors admit invalid poses because they do not model how joint-limits vary with pose. Here we make two key contributions. First, we collect a motion capture dataset that explores a wide range of human poses. From this we learn a pose-dependent model of joint limits that forms our prior. Both dataset and prior are available for research purposes. Second, we define a general parametrization of body pose and a new, multi-stage, method to estimate 3D pose from 2D joint locations using an over-complete dictionary of poses. Our method shows good generalization while avoiding impossible poses. We quantitatively compare our method with recent work and show state-of-the-art results on 2D to 3D pose estimation using the CMU mocap dataset. We also show superior results using manual annotations on real images and automatic detections on the Leeds sports pose dataset.",
"We address the problem of unsupervised learning of complex articulated object models from 3D range data. We describe an algorithm whose input is a set of meshes corresponding to different configurations of an articulated object. The algorithm automatically recovers a decomposition of the object into approximately rigid parts, the location of the parts in the different object instances, and the articulated object skeleton linking the parts. Our algorithm first registers allthe meshes using an unsupervised non-rigid technique described in a companion paper. It then segments the meshes using a graphical model that captures the spatial contiguity of parts. The segmentation is done using the EM algorithm, iterating between finding a decomposition of the object into rigid parts, and finding the location of the parts in the object instances. Although the graphical model is densely connected, the object decomposition step can be performed optimally and efficiently, allowing us to identify a large number of object parts while avoiding local maxima. We demonstrate the algorithm on real world datasets, recovering a 15-part articulated model of a human puppet from just 7 different puppet configurations, as well as a 4 part model of a fiexing arm where significant non-rigid deformation was present.",
"Capturing the motion of two hands interacting with an object is a very challenging task due to the large number of degrees of freedom, self-occlusions, and similarity between the fingers, even in the case of multiple cameras observing the scene. In this paper we propose to use discriminatively learned salient points on the fingers and to estimate the finger-salient point associations simultaneously with the estimation of the hand pose. We introduce a differentiable objective function that also takes edges, optical flow and collisions into account. Our qualitative and quantitative evaluations show that the proposed approach achieves very accurate results for several challenging sequences containing hands and objects in action.",
"A circuit for controlling a display panel identifying malfunctions in an engine generator receives a plurality of electrical signals from the engine generator, each of which identifies a particular trouble. The electrical signal may be produced by closing a switch. It is caused to operate a latch that lights a light associated with the particular malfunction. Indications of other malfunctions are suppressed until the circuit is reset. A manual reset tests all lights and then leaves them off ready to respond. A power-up reset does not test lights but leaves all lights off ready to respond. The circuit is rendered especially appropriate for military use by hardening against radiation and against pulses of electromagnetic interference.",
"Most real world objects consist of non-uniform materials; as a result, during deformation the bending and shearing are distributed non-uniformly and depend on the local stiffness of the material. In the virtual environment there are three prevalent approaches to model deformation: purely geometric, physically driven, and skeleton based. This paper proposes a new approach to model deformation that incorporates non-uniform materials into the geometric deformation framework. Our approach provides a simple and intuitive method to control the distribution of the bending and shearing throughout the model according to the local material stiffness. Thus, we are able to generate realistic looking, material-aware deformations at interactive rates. Our method works on all types of models, including models with continuous stiffness gradation and non-articulated models such as cloth. The material stiffness across the surface can be specified by the user with an intuitive paint-like interface or it can be learned from a sequence of sample deformations."
]
} |
1507.07646 | 2951438410 | Understanding how an animal can deform and articulate is essential for a realistic modification of its 3D model. In this paper, we show that such information can be learned from user-clicked 2D images and a template 3D model of the target animal. We present a volumetric deformation framework that produces a set of new 3D models by deforming a template 3D model according to a set of user-clicked images. Our framework is based on a novel locally-bounded deformation energy, where every local region has its own stiffness value that bounds how much distortion is allowed at that location. We jointly learn the local stiffness bounds as we deform the template 3D mesh to match each user-clicked image. We show that this seemingly complex task can be solved as a sequence of convex optimization problems. We demonstrate the effectiveness of our approach on cats and horses, which are highly deformable and articulated animals. Our framework produces new 3D models of animals that are significantly more plausible than methods without learned stiffness. | One of the biggest drawbacks in learning from 3D data is that it requires a large set of registered 3D models or scans, which is considerably more challenging to obtain compared to a set of user-clicked photographs. All of these methods rely on 3D data with the exception of @cite_30 . They learn a morphable model of non-rigid objects such as dolphins from annotated 2D images and a template 3D model. Our work is complementary to their approach in that they focus on intra-class shape variation such as fat vs thin dolphins, while we focus on deformations and articulations due to pose changes. The use of a morphable model also makes their approach not suitable for objects undergoing large articulations. | {
"cite_N": [
"@cite_30"
],
"mid": [
"2066090933"
],
"abstract": [
"3D morphable models are low-dimensional parameterizations of 3D object classes which provide a powerful means of associating 3D geometry to 2D images. However, morphable models are currently generated from 3D scans, so for general object classes such as animals they are economically and practically infeasible. We show that, given a small amount of user interaction (little more than that required to build a conventional morphable model), there is enough information in a collection of 2D pictures of certain object classes to generate a full 3D morphable model, even in the absence of surface texture. The key restriction is that the object class should not be strongly articulated, and that a very rough rigid model should be provided as an initial estimate of the “mean shape.” The model representation is a linear combination of subdivision surfaces, which we fit to image silhouettes and any identifiable key points using a novel combined continuous-discrete optimization strategy. Results are demonstrated on several natural object classes, and show that models of rather high quality can be obtained from this limited information."
]
} |
1507.07646 | 2951438410 | Understanding how an animal can deform and articulate is essential for a realistic modification of its 3D model. In this paper, we show that such information can be learned from user-clicked 2D images and a template 3D model of the target animal. We present a volumetric deformation framework that produces a set of new 3D models by deforming a template 3D model according to a set of user-clicked images. Our framework is based on a novel locally-bounded deformation energy, where every local region has its own stiffness value that bounds how much distortion is allowed at that location. We jointly learn the local stiffness bounds as we deform the template 3D mesh to match each user-clicked image. We show that this seemingly complex task can be solved as a sequence of convex optimization problems. We demonstrate the effectiveness of our approach on cats and horses, which are highly deformable and articulated animals. Our framework produces new 3D models of animals that are significantly more plausible than methods without learned stiffness. | Using 2D images requires camera parameters for projecting the deformed 3D models to image coordinates. @cite_30 assume a rough camera initialization is provided by a user, but we estimate the camera parameters directly from user-clicked 2D-to-3D correspondences. There are many works regarding the estimation of camera parameters from image correspondences, and their discussion is outside the scope of this paper. We refer the reader to @cite_24 for more details. | {
"cite_N": [
"@cite_30",
"@cite_24"
],
"mid": [
"2066090933",
"2033819227"
],
"abstract": [
"3D morphable models are low-dimensional parameterizations of 3D object classes which provide a powerful means of associating 3D geometry to 2D images. However, morphable models are currently generated from 3D scans, so for general object classes such as animals they are economically and practically infeasible. We show that, given a small amount of user interaction (little more than that required to build a conventional morphable model), there is enough information in a collection of 2D pictures of certain object classes to generate a full 3D morphable model, even in the absence of surface texture. The key restriction is that the object class should not be strongly articulated, and that a very rough rigid model should be provided as an initial estimate of the “mean shape.” The model representation is a linear combination of subdivision surfaces, which we fit to image silhouettes and any identifiable key points using a novel combined continuous-discrete optimization strategy. Results are demonstrated on several natural object classes, and show that models of rather high quality can be obtained from this limited information.",
"From the Publisher: A basic problem in computer vision is to understand the structure of a real world scene given several images of it. Recent major developments in the theory and practice of scene reconstruction are described in detail in a unified framework. The book covers the geometric principles and how to represent objects algebraically so they can be computed and applied. The authors provide comprehensive background material and explain how to apply the methods and implement the algorithms directly."
]
} |
1507.07646 | 2951438410 | Understanding how an animal can deform and articulate is essential for a realistic modification of its 3D model. In this paper, we show that such information can be learned from user-clicked 2D images and a template 3D model of the target animal. We present a volumetric deformation framework that produces a set of new 3D models by deforming a template 3D model according to a set of user-clicked images. Our framework is based on a novel locally-bounded deformation energy, where every local region has its own stiffness value that bounds how much distortion is allowed at that location. We jointly learn the local stiffness bounds as we deform the template 3D mesh to match each user-clicked image. We show that this seemingly complex task can be solved as a sequence of convex optimization problems. We demonstrate the effectiveness of our approach on cats and horses, which are highly deformable and articulated animals. Our framework produces new 3D models of animals that are significantly more plausible than methods without learned stiffness. | There is a rich variety of mesh deformation techniques in the literature @cite_52 @cite_45 @cite_49 @cite_13 @cite_17 . The main idea is to minimize some form of deformation objective that governs the way the mesh is modified according to user-supplied positional constraints. Common objectives are minimization of the elastic energy @cite_31 or preservation of local differential properties @cite_36 . The solution can be constrained to lie in the space of natural deformations, which are learned from exemplar meshes @cite_6 @cite_12 @cite_22 @cite_16 @cite_11 . Our approach is related to these methods, except that we learn the space of deformations from a set of annotated 2D images. @cite_52 offers an excellent survey on linear surface deformation methods. While simple and efficient to use, surface deformation methods suffer from unnatural volumetric changes for large deformations @cite_45 @cite_49 . 
Our work is based on a volumetric representation, which we discuss in detail in the next section. | {
"cite_N": [
"@cite_11",
"@cite_22",
"@cite_36",
"@cite_52",
"@cite_6",
"@cite_45",
"@cite_49",
"@cite_31",
"@cite_16",
"@cite_13",
"@cite_12",
"@cite_17"
],
"mid": [
"1988242451",
"",
"2161578027",
"2166575557",
"2122007052",
"2142326458",
"2121819464",
"1989871863",
"",
"2047947369",
"2016772237",
""
],
"abstract": [
"Most real world objects consist of non-uniform materials; as a result, during deformation the bending and shearing are distributed non-uniformly and depend on the local stiffness of the material. In the virtual environment there are three prevalent approaches to model deformation: purely geometric, physically driven, and skeleton based. This paper proposes a new approach to model deformation that incorporates non-uniform materials into the geometric deformation framework. Our approach provides a simple and intuitive method to control the distribution of the bending and shearing throughout the model according to the local material stiffness. Thus, we are able to generate realistic looking, material-aware deformations at interactive rates. Our method works on all types of models, including models with continuous stiffness gradation and non-articulated models such as cloth. The material stiffness across the surface can be specified by the user with an intuitive paint-like interface or it can be learned from a sequence of sample deformations.",
"",
"One of the main challenges in editing a mesh is to retain the visual appearance of the surface after applying various modifications. In this paper we advocate the use of linear differential coordinates as means to preserve the high-frequency detail of the surface. The differential coordinates represent the details and are defined by a linear transformation of the mesh vertices. This allows the reconstruction of the edited surface by solving a linear system that satisfies the reconstruction of the local details in least squares sense. Since the differential coordinates are defined in a global coordinate system they are not rotation-invariant. To compensate for that, we rotate them to agree with the rotation of an approximated local frame. We show that the linear least squares system can be solved fast enough to guarantee interactive response time thanks to a precomputed factorization of the coefficient matrix. We demonstrate that our approach enables to edit complex detailed meshes while keeping the shape of the details in their natural orientation.",
"This survey reviews the recent advances in linear variational mesh deformation techniques. These methods were developed for editing detailed high-resolution meshes like those produced by scanning real-world objects. The challenge of manipulating such complex surfaces is threefold: The deformation technique has to be sufficiently fast, robust, intuitive, and easy to control to be useful for interactive applications. An intuitive and, thus, predictable deformation tool should provide physically plausible and aesthetically pleasing surface deformations, which, in particular, requires its geometric details to be preserved. The methods that we survey generally formulate surface deformation as a global variational optimization problem that addresses the differential properties of the edited surface. Efficiency and robustness are achieved by linearizing the underlying objective functional such that the global optimization amounts to solving a sparse linear system of equations. We review the different deformation energies and detail preservation techniques that were proposed in recent years, together with the various techniques to rectify the linearization artifacts. Our goal is to provide the reader with a systematic classification and comparative description of the different techniques, revealing the strengths and weaknesses of each approach in common editing scenarios.",
"Deformation transfer applies the deformation exhibited by a source triangle mesh onto a different target triangle mesh. Our approach is general and does not require the source and target to share the same number of vertices or triangles, or to have identical connectivity. The user builds a correspondence map between the triangles of the source and those of the target by specifying a small set of vertex markers. Deformation transfer computes the set of transformations induced by the deformation of the source mesh, maps the transformations through the correspondence from the source to the target, and solves an optimization problem to consistently apply the transformations to the target shape. The resulting system of linear equations can be factored once, after which transferring a new deformation to the target mesh requires only a backsubstitution step. Global properties such as foot placement can be achieved by constraining vertex positions. We demonstrate our method by retargeting full body key poses, applying scanned facial deformations onto a digital character, and remapping rigid and non-rigid animation sequences from one mesh onto another.",
"We present a novel technique for large deformations on 3D meshes using the volumetric graph Laplacian. We first construct a graph representing the volume inside the input mesh. The graph need not form a solid meshing of the input mesh's interior; its edges simply connect nearby points in the volume. This graph's Laplacian encodes volumetric details as the difference between each point in the graph and the average of its neighbors. Preserving these volumetric details during deformation imposes a volumetric constraint that prevents unnatural changes in volume. We also include in the graph points a short distance outside the mesh to avoid local self-intersections. Volumetric detail preservation is represented by a quadric energy function. Minimizing it preserves details in a least-squares sense, distributing error uniformly over the whole deformed mesh. It can also be combined with conventional constraints involving surface positions, details or smoothness, and efficiently minimized by solving a sparse linear system.We apply this technique in a 2D curve-based deformation system allowing novice users to create pleasing deformations with little effort. A novel application of this system is to apply nonrigid and exaggerated deformations of 2D cartoon characters to 3D meshes. We demonstrate our system's potential with several examples.",
"We present a new method for 3D shape modeling that achieves intuitive and robust deformations by emulating physically plausible surface behavior inspired by thin shells and plates. The surface mesh is embedded in a layer of volumetric prisms, which are coupled through non-linear, elastic forces. To deform the mesh, prisms are rigidly transformed to satisfy user constraints while minimizing the elastic energy. The rigidity of the prisms prevents degenerations even under extreme deformations, making the method numerically stable. For the underlying geometric optimization we employ both local and global shape matching techniques. Our modeling framework allows for the specification of various geometrically intuitive parameters that provide control over the physical surface behavior. While computationally more involved than previous methods, our approach significantly improves robustness and simplifies user interaction for large, complex deformations.",
"The theory of elasticity describes deformable materials such as rubber, cloth, paper, and flexible met als. We employ elasticity theory to construct differential equations that model the behavior of non-rigid curves, surfaces, and solids as a function of time. Elastically deformable models are active: they respond in a natural way to applied forces, constraints, ambient media, and impenetrable obstacles. The models are fundamentally dynamic and realistic animation is created by numerically solving their underlying differential equations. Thus, the description of shape and the description of motion are unified.",
"",
"We present an algorithm that generates natural and intuitive deformations via direct manipulation for a wide range of shape representations and editing scenarios. Our method builds a space deformation represented by a collection of affine transformations organized in a graph structure. One transformation is associated with each graph node and applies a deformation to the nearby space. Positional constraints are specified on the points of an embedded object. As the user manipulates the constraints, a nonlinear minimization problem is solved to find optimal values for the affine transformations. Feature preservation is encoded directly in the objective function by measuring the deviation of each transformation from a true rotation. This algorithm addresses the problem of \"embedded deformation\" since it deforms space through direct manipulation of objects embedded within it, while preserving the embedded objects' features. We demonstrate our method by editing meshes, polygon soups, mesh animations, and animated particle systems.",
"The ability to position a small subset of mesh vertices and produce a meaningful overall deformation of the entire mesh is a fundamental task in mesh editing and animation. However, the class of meaningful deformations varies from mesh to mesh and depends on mesh kinematics, which prescribes valid mesh configurations, and a selection mechanism for choosing among them. Drawing an analogy to the traditional use of skeleton-based inverse kinematics for posing skeletons. we define mesh-based inverse kinematics as the problem of finding meaningful mesh deformations that meet specified vertex constraints.Our solution relies on example meshes to indicate the class of meaningful deformations. Each example is represented with a feature vector of deformation gradients that capture the affine transformations which individual triangles undergo relative to a reference pose. To pose a mesh, our algorithm efficiently searches among all meshes with specified vertex positions to find the one that is closest to some pose in a nonlinear span of the example feature vectors. Since the search is not restricted to the span of example shapes, this produces compelling deformations even when the constraints require poses that are different from those observed in the examples. Furthermore, because the span is formed by a nonlinear blend of the example feature vectors, the blending component of our system may also be used independently to pose meshes by specifying blending weights or to compute multi-way morph sequences.",
""
]
} |
1507.07833 | 1557814434 | Comprehending the virality of a meme can help us in addressing the problems pertaining to disciplines like epidemiology and digital marketing. Therefore, it is not surprising that memetics has remained a highly analyzed research topic ever since the mid 1990s. Some scientists choose to investigate the intrinsic contagiousness of a meme while others study the problem from a network theory perspective. In this paper, we revisit the idea of a core-periphery structure and apply it to understand the trajectory of a viral meme in a social network. We have proposed shell-based hill climbing algorithms to determine the path from a periphery shell (where the meme originates) to the core of the network. Further simulations and analysis of the network's behavioral characteristics helped us unearth specialized shells which we term Pseudo-Cores. These shells emulate the behavior of the core in terms of the size of the cascade triggered. In our experiments, we have considered two sets for the target nodes, one being the core and the other being any of the pseudo-cores. We compare our algorithms against already existing path finding algorithms and experimentally validate their better performance. | The information derived from the internet is being harnessed in a myriad of applications today. @cite_5 have used the information potential of a social network to predict epidemics in a population. Social networks act as reservoirs of data which can be used to predict the results of elections @cite_17 as well as patterns in crime @cite_3 . A meme is a term used to describe a unit of information traversing a network. These memes behave like biological viruses and evolve over time, as suggested in @cite_1 . Memetics, or the study of memes, has a wide range of applications in several research areas like Digital Marketing and Epidemiology.
This is not surprising, as deciphering patterns in any kind of data or trajectories of information flow in a network can have wide-ranging impacts. If for some reason a piece of information goes “viral” - that is, it impacts a large portion of the network - then the meme holds more potential in the network for analysis. | {
"cite_N": [
"@cite_5",
"@cite_1",
"@cite_3",
"@cite_17"
],
"mid": [
"",
"2050316258",
"2037625889",
"2127925090"
],
"abstract": [
"",
"Goffman and Newill1 have directed attention to the analogy between the spreading of an infectious disease and the dissemination of information. We have recently examined the spreading of a rumour from the point of view of mathematical epidemiology and wish to report very briefly here on work to be published in detail elsewhere2. In particular, we must emphasize that a mathematical model for the spreading of rumours can be constructed in a number of different ways, depending on the mechanism postulated to describe the growth and decay of the actual spreading process. In all these models the mathematical techniques familiar in mathematical epidemiology can be applied, but even the qualitative results so obtained need not necessarily be as expected on the basis of the formal analogy with epidemics.",
"This paper describes early work trying to predict stock market indicators such as Dow Jones, NASDAQ and S&P 500 by analyzing Twitter posts. We collected the twitter feeds for six months and got a randomized subsample of about one hundredth of the full volume of all tweets. We measured collective hope and fear on each day and analyzed the correlation between these indices and the stock market indicators. We found that emotional tweet percentage significantly negatively correlated with Dow Jones, NASDAQ and S&P 500, but displayed significant positive correlation to VIX. It therefore seems that just checking on twitter for emotional outbursts of any kind gives a predictor of how the stock market will be doing the next day.",
"To what extend can one use Twitter in opinion polls for political elections? Merely counting Twitter messages mentioning political party names is no guarantee for obtaining good election predictions. By improving the quality of the document collection and by performing sentiment analysis, predictions based on entity counts in tweets can be considerably improved, and become nearly as good as traditionally obtained opinion polls."
]
} |
1507.07789 | 2214506488 | We present an iterative algorithm for solving a class of Laplacian systems of equations in @math iterations, where @math is a measure of nonlinearity, @math is the number of variables, @math is the number of nonzero entries in the graph Laplacian @math , @math is the solution accuracy and @math neglects (non-leading) logarithmic terms. This algorithm is a natural nonlinear extension of the one by Kelner et al., which solves a linear Laplacian system of equations in nearly linear time. Unlike the linear case, in the nonlinear case each iteration takes @math time so the total running time is @math . For sparse graphs where @math and fixed @math this nonlinear algorithm is @math , which is slightly faster than standard methods for solving linear equations, which require approximately @math time. Our analysis relies on the construction of a nonlinear "energy function" and a nonlinear extension of the duality analysis of Kelner et al. to the nonlinear case, without any explicit references to spectral analysis or electrical flows. These new insights and results provide tools for more general extensions to spectral theory and nonlinear applications. | Our algorithm is based on Kelner et al.'s algorithm @cite_3 , which dramatically reduces the amount of machinery required. Its origins clearly lie in the more sophisticated machinery of earlier papers, such as low-stretch spanning trees (e.g., @cite_6 ) and ultra-sparsifiers (e.g., @cite_10 ), but it does not require them directly. | {
"cite_N": [
"@cite_10",
"@cite_6",
"@cite_3"
],
"mid": [
"2045107949",
"2072142932",
"2951126601"
],
"abstract": [
"We present algorithms for solving symmetric, diagonally-dominant linear systems to accuracy e in time linear in their number of non-zeros and log (κ f (A) e), where κ f (A) is the condition number of the matrix defining the linear system. Our algorithm applies the preconditioned Chebyshev iteration with preconditioners designed using nearly-linear time algorithms for graph sparsification and graph partitioning.",
"We prove that any graph G=(V,E) with n points and m edges has a spanning tree T such that ∑(u,v)∈ E(G)dT(u,v) = O(m log n log log n). Moreover such a tree can be found in time O(m log n log log n). Our result is obtained using a new pet al-decomposition approach which guarantees that the radius of each cluster in the tree is at most 4 times the radius of the induced subgraph of the cluster in the original graph.",
"In this paper, we present a simple combinatorial algorithm that solves symmetric diagonally dominant (SDD) linear systems in nearly-linear time. It uses very little of the machinery that previously appeared to be necessary for a such an algorithm. It does not require recursive preconditioning, spectral sparsification, or even the Chebyshev Method or Conjugate Gradient. After constructing a \"nice\" spanning tree of a graph associated with the linear system, the entire algorithm consists of the repeated application of a simple (non-recursive) update rule, which it implements using a lightweight data structure. The algorithm is numerically stable and can be implemented without the increased bit-precision required by previous solvers. As such, the algorithm has the fastest known running time under the standard unit-cost RAM model. We hope that the simplicity of the algorithm and the insights yielded by its analysis will be useful in both theory and practice."
]
} |
1507.07301 | 980556636 | Economic Load Dispatch (ELD) is one of the essential components in power system control and operation. Although the conventional ELD formulation can be solved using mathematical programming techniques, modern power systems introduce new models of the power units which are non-convex, non-differentiable, and sometimes non-continuous. In order to solve such non-convex ELD problems, in this paper we propose a new approach based on the Social Spider Algorithm (SSA). The classical SSA is modified and enhanced to adapt to the unique characteristics of ELD problems, e.g., valve-point effects, multi-fuel operations, prohibited operating zones, and line losses. To demonstrate the superiority of our proposed approach, five widely adopted test systems are employed and the simulation results are compared with the state-of-the-art algorithms. In addition, the parameter sensitivity is illustrated by a series of simulations. The simulation results show that SSA can solve ELD problems effectively and efficiently. | Besides the above non-EA approaches, many EA methods have also been developed to solve various formulations of ELD. Orero and Irving proposed a simple Genetic Algorithm (GA) to solve ELD with POZ @cite_3 . Besides the standard GA, this work also devised a deterministic crowding GA model to solve the problem. Chiang developed an improved GA with the multiplier updating scheme for ELD with VPE and MFO @cite_28 . In this work, the proposed GA is incorporated with an improved evolutionary direction operator. In addition, the tailor-made migration operator efficiently searches the solution space. He proposed a hybrid GA approach to solve ELD with VPE @cite_9 . The algorithm proposed is a hybrid GA with differential evolution (DE) and sequential quadratic programming (SQP). Sinha developed an Evolutionary Programming (EP) method to solve ELD with VPE @cite_18 .
Pereira-Neto proposed an Evolutionary Strategy (ES) method to solve ELD with VPE and POZ @cite_24 . DE has also been adapted to solve ELD @cite_43 @cite_0 . | {
"cite_N": [
"@cite_18",
"@cite_28",
"@cite_9",
"@cite_3",
"@cite_24",
"@cite_43",
"@cite_0"
],
"mid": [
"2112592051",
"2139638005",
"1968481105",
"2011212371",
"1971415828",
"2156446695",
"2067675584"
],
"abstract": [
"Evolutionary programming has emerged as a useful optimization tool for handling nonlinear programming problems. Various modifications to the basic method have been proposed with a view to enhance speed and robustness and these have been applied successfully on some benchmark mathematical problems. But few applications have been reported on real-world problems such as economic load dispatch (ELD). The performance of evolutionary programs on ELD problems is examined and presented in this paper in two parts. In Part I, modifications to the basic technique are proposed, where adaptation is based on scaled cost. In Part II, evolutionary programs are developed with adaptation based on an empirical learning rate. Absolute, as well as relative, performance of the algorithms are investigated on ELD problems of different size and complexity having nonconvex cost curves where conventional gradient-based methods are inapplicable.",
"This paper presents an improved genetic algorithm with multiplier updating (IGA spl I.bar MU) to solve power economic dispatch (PED) problems of units with valve-point effects and multiple fuels. The proposed IGA spl I.bar MU integrates the improved genetic algorithm (IGA) and the multiplier updating (MU). The IGA equipped with an improved evolutionary direction operator and a migration operation can efficiently search and actively explore solutions, and the MU is employed to handle the equality and inequality constraints of the PED problem. Few PED problem-related studies have seldom addressed both valve-point loadings and change fuels. To show the advantages of the proposed algorithm, which was applied to test PED problems with one example considering valve-point effects, one example considering multiple fuels, and one example addressing both valve-point effects and multiple fuels. Additionally, the proposed algorithm was compared with previous methods and the conventional genetic algorithm (CGA) with the MU (CGA spl I.bar MU), revealing that the proposed IGA spl I.bar MU is more effective than previous approaches, and applies the realistic PED problem more efficiently than does the CGA spl I.bar MU. Especially, the proposed algorithm is highly promising for the large-scale system of the actual PED operation.",
"Abstract An efficient hybrid genetic algorithm (HGA) approach for solving the economic dispatch problem (EDP) with valve-point effect is presented in this paper. The proposed method combines the GA algorithm with the differential evolution (DE) and sequential quadratic programming (SQP) technique to improve the performance of the algorithm. GA is the main optimizer, while the DE and SQP are used to fine tune in the solution of the GA run. To improve the performance of the SQP, the cost function of EDP is approximated by using a smooth and differentiable function based on the maximum entropy principle. An initial population obtained by using uniform design exerts optimal performance of the proposed hybrid algorithm. The combined algorithm is validated for two test systems consisting of 13 and 40 thermal units whose incremental fuel cost function takes into account the valve-point loading effects. The proposed combined method outperforms other algorithms reported in literatures (EP, EP–SQP, PSO, PSO–SQP) for EDP considering valve-point effects.",
"The work explores the use of a genetic algorithm for the solution of an economic dispatch problem in power systems where some of the units have prohibited operating zones. Genetic algorithms have a capability to provide global optimal solutions in problem domains where a complete traversion of the whole search space is computationally infeasible. Two different implementations of the genetic algorithm for the solution of this dispatch problem are presented: a standard genetic algorithm, and a deterministic crowding genetic algorithm model. The results demonstrate that the genetic algorithm can be applied successfully in the solution of problems represented with nonconvex functions.",
"A simple and efficient algorithm based on evolutionary strategies is proposed for the solution of the economic dispatch (ED) problem with noncontinuous and nonsmooth nonconvex cost functions and with generator constraints being considered. The proposed method solves both, the ED problem with nonconvex nonsmooth cost functions due to valve-point loading and the ED problem that takes into account nonlinear generator characteristics such as ramp-rate limits and prohibited operating zones in the power system operation. The effectiveness of the algorithm is demonstrated using six different test systems and the performance is compared with other relevant methods reported in the literature. In all cases, the proposed algorithm either matches or outperforms the behaviour reported for the existing algorithms. This level of performance is obtained despite the simplicity of the approach.",
"This paper presents a novel stochastic optimisation approach to determining the feasible optimal solution of the economic dispatch (ED) problem considering various generator constraints. Many practical constraints of generators, such as ramp rate limits, prohibited operating zones and the valve point effect, are considered. These constraints make the ED problem a non-smooth non-convex minimisation problem with constraints. The proposed optimisation algorithm is called self-tuning hybrid differential evolution (self-tuning HDE). The self-tuning HDE utilises the concept of the 1 5 success rule of evolution strategies (ESs) in the original HDE to accelerate the search for the global optimum. Three test power systems, including 3-, 13- and 40-unit power systems, are applied to compare the performance of the proposed algorithm with genetic algorithms, the differential evolution algorithm and the HDE algorithm. Numerical results indicate that the entire performance of the proposed self-tuning HDE algorithm outperforms the other three algorithms.",
"In this work, differential evolution (DE) algorithm was studied for solving economic load dispatch (ELD) problems in power systems. DE has proven to be effective in solving many real world constrained optimization problems in different domains. ELD problems are complex and nonlinear in nature with equality and inequality constraints and here special measures were taken to satisfy those. Five ELD problems of different characteristics were used to investigate the effectiveness of the proposal. Comparing with the other existing techniques, the current proposal was found better than, or at least comparable to, them considering the quality of the solution obtained."
]
} |
1507.06841 | 2952127628 | Nowadays, to facilitate the communication and cooperation among employees, a new family of online social networks has been adopted in many companies, which are called "enterprise social networks" (ESNs). ESNs can provide employees with various professional services to help them deal with daily work issues. Meanwhile, employees in companies are usually organized into different hierarchies according to the relative ranks of their positions. The company's internal management structure can be outlined visually with the organizational chart, which is normally confidential to the public out of privacy and security concerns. In this paper, we want to study the IOC (Inference of Organizational Chart) problem to identify a company's internal organizational chart based on the heterogeneous online ESN launched in it. IOC is very challenging to address as, to guarantee smooth operations, the internal organizational charts of companies need to meet certain structural requirements (about their depth and width). To solve the IOC problem, a novel unsupervised method Create (ChArT REcovEr) is proposed in this paper, which consists of 3 steps: (1) social stratification of ESN users into different social classes, (2) supervision link inference from managers to subordinates, and (3) consecutive social class matching to prune the redundant supervision links. Extensive experiments conducted on a real-world online ESN dataset demonstrate that Create can perform very well in addressing the IOC problem. | Enterprise social networks are important sources for employees in companies to get reliable information. @cite_30 propose to search for experts in the enterprise with both text and social network analysis techniques. They examine the users' dynamic profile information and the social distance to the expert before deciding how to initiate contact.
Enterprise social networks can bring many benefits to companies, and the motivations for enterprise social network adoption in companies are studied in detail in @cite_11 . Users in enterprise social networks will connect and learn from each other through personal and professional sharing. People sensemaking and relation building on an enterprise social network site is studied by @cite_42 . In addition, social connections among users in enterprise social networks usually have multiple facets. @cite_10 propose to study the multiplexity of social connections among users in enterprise social networks, which include both professional and personal closeness. | {
"cite_N": [
"@cite_30",
"@cite_42",
"@cite_10",
"@cite_11"
],
"mid": [
"1994838421",
"2106939810",
"2136254842",
"1980580900"
],
"abstract": [
"Employees depend on other people in the enterprise for rapid access to important information. But current systems for finding experts do not adequately address the social implications of finding and engaging strangers in conversation. This paper provides a user study of SmallBlue, a social-context-aware expertise search system that can be used to identify experts, see dynamic profile information and get information about the social distance to the expert, before deciding whether and how to initiate contact. The system uses an innovative approach to privacy to infer content and dynamic social networks from email and chat logs. We describe usage of SmallBlue and discuss implications for the next generation of enterprise-wide systems for finding people.",
"This paper describes a social network site designed to support employees within an enterprise in connecting and learning about each other through personal and professional sharing. We introduce the design concepts and provide a detailed account of the first three months of usage, involving nearly 300 users. Our findings suggest that employees find the site particularly useful as a way to perform people sensemaking of individuals and to connect and maintain relationships with others on the site.",
"In this work we analyze the behavior on a company-internal social network site to determine which interaction patterns signal closeness between colleagues. Regression analysis suggests that employee behavior on social network sites (SNSs) reveals information about both professional and personal closeness. While some factors are predictive of general closeness (e.g. content recommendations), other factors signal that employees feel personal closeness towards their colleagues, but not professional closeness (e.g. mutual profile commenting). This analysis contributes to our understanding of how SNS behavior reflects relationship multiplexity: the multiple facets of our relationships with SNS connections.",
"The introduction of a social networking site inside of a large enterprise enables a new method of communication between colleagues, encouraging both personal and professional sharing inside the protected walls of a company intranet. Our analysis of user behavior and interviews presents the case that professionals use internal social networking to build stronger bonds with their weak ties and to reach out to employees they do not know. Their motivations in doing this include connecting on a personal level with coworkers, advancing their career with the company, and campaigning for their projects."
]
} |
1507.06841 | 2952127628 | Nowadays, to facilitate the communication and cooperation among employees, a new family of online social networks has been adopted in many companies, which are called the "enterprise social networks" (ESNs). ESNs can provide employees with various professional services to help them deal with daily work issues. Meanwhile, employees in companies are usually organized into different hierarchies according to the relative ranks of their positions. The company internal management structure can be outlined with the organizational chart visually, which is normally confidential to the public out of the privacy and security concerns. In this paper, we want to study the IOC (Inference of Organizational Chart) problem to identify company internal organizational chart based on the heterogeneous online ESN launched in it. IOC is very challenging to address as, to guarantee smooth operations, the internal organizational charts of companies need to meet certain structural requirements (about its depth and width). To solve the IOC problem, a novel unsupervised method Create (ChArT REcovEr) is proposed in this paper, which consists of 3 steps: (1) social stratification of ESN users into different social classes, (2) supervision link inference from managers to subordinates, and (3) consecutive social classes matching to prune the redundant supervision links. Extensive experiments conducted on real-world online ESN dataset demonstrate that Create can perform very well in addressing the IOC problem. | Cross-social-network studies have become a hot research topic in recent years. @cite_1 are the first to propose the concepts of "anchor links", "anchor users", "aligned networks", etc. A novel network anchoring method is proposed in @cite_1 to address the network alignment problem. Cross-network heterogeneous link prediction problems are studied by @cite_36 @cite_24 @cite_37 @cite_4 by transferring links across partially aligned networks.
Besides link prediction problems, @cite_18 proposes to partition multiple large-scale social networks simultaneously, and @cite_31 @cite_2 study the community detection problem across partially aligned networks. The information diffusion process across aligned networks is analyzed in @cite_21 . | {
"cite_N": [
"@cite_37",
"@cite_18",
"@cite_4",
"@cite_36",
"@cite_21",
"@cite_1",
"@cite_24",
"@cite_2",
"@cite_31"
],
"mid": [
"",
"2075283574",
"",
"2127989284",
"633744573",
"2168936785",
"",
"",
"2405147195"
],
"abstract": [
"",
"Social networks have been part of people's daily life and plenty of users have registered accounts in multiple social networks. Interconnections among multiple social networks add a multiplier effect to social applications when fully used. With the sharp expansion of network size, traditional standalone algorithms can no longer support computing on large scale networks while alternatively, distributed and parallel computing become a solution to utilize the data-intensive information hidden in multiple social networks. As such, we study synergistic partitioning, which takes the relationships among different networks into consideration and focuses on partitioning the same nodes of different networks into the same partitions. With that, the partitions containing the same nodes can be assigned to the same server to improve the data locality and reduce communication overhead among servers, which are very important for distributed applications. To date, there have been limited studies on multiple large scale network partitioning due to three major challenges: 1) the need to consider relationships across multiple networks given the existence of intricate interactions, 2) the difficulty for standalone programs to utilize traditional partitioning methods, 3) the fact that to generate balanced partitions is NP-complete. In this paper, we propose a novel framework to partition multiple social networks synergistically. In particular, we apply a distributed multilevel k-way partitioning method to divide the first network into k partitions. Based on the given anchor nodes which exist in all the social networks and the partition results of the first network, using MapReduce, we then develop a modified distributed multilevel partitioning method to divide other networks. Extensive experiments on two real data sets demonstrate that our method can significantly outperform baseline independent-partitioning method in accuracy and scalability.",
"",
"Online social networks have gained great success in recent years and many of them involve multiple kinds of nodes and complex relationships. Among these relationships, social links among users are of great importance. Many existing link prediction methods focus on predicting social links that will appear in the future among all users based upon a snapshot of the social network. In real-world social networks, many new users are joining in the service every day. Predicting links for new users is more important. Different from conventional link prediction problems, link prediction for new users is more challenging due to the following reasons: (1) differences in information distributions between new users and the existing active users (i.e., old users); (2) lack of information from the new users in the network. We propose a link prediction method called SCAN-PS (Supervised Cross Aligned Networks link prediction with Personalized Sampling), to solve the link prediction problem for new users with information transferred from both the existing active users in the target network and other source networks through aligned accounts. We propose a within-target-network personalized sampling method to process the existing active users' information in order to accommodate the differences in information distributions before the intra-network knowledge transfer. SCAN-PS can also exploit information in other source networks, where the user accounts are aligned with the target network. In this way, SCAN-PS could solve the cold start problem when information of these new users is totally absent in the target network.",
"The influence maximization problem aims at finding a subset of seed users who can maximize the spread of influence in online social networks (OSNs). Existing works mostly focus on one single homogenous network. However, in the real world, OSNs (1) are usually heterogeneous, via which users can influence each others in multiple channels; and (2) share common users, via whom information could propagate across networks.",
"Location-based social networks (LBSNs) are one kind of online social networks offering geographic services and have been attracting much attention in recent years. LBSNs usually have complex structures, involving heterogeneous nodes and links. Many recommendation services in LBSNs (e.g., friend and location recommendation) can be cast as link prediction problems (e.g., social link and location link prediction). Traditional link prediction researches on LBSNs mostly focus on predicting either social links or location links, assuming the prediction tasks of different types of links to be independent. However, in many real-world LBSNs, the prediction tasks for social links and location links are strongly correlated and mutually influential. Another key challenge in link prediction on LBSNs is the data sparsity problem (i.e., \"new network\" problem), which can be encountered when LBSNs branch into new geographic areas or social groups. Actually, nowadays, many users are involved in multiple networks simultaneously and users who just join one LBSN may have been using other LBSNs for a long time. In this paper, we study the problem of predicting multiple types of links simultaneously for a new LBSN across partially aligned LBSNs and propose a novel method TRAIL (TRAnsfer heterogeneous lInks across LBSNs). TRAIL can accumulate information for locations from online posts and extract heterogeneous features for both social links and location links. TRAIL can predict multiple types of links simultaneously. In addition, TRAIL can transfer information from other aligned networks to the new network to solve the problem of lacking information. Extensive experiments conducted on two real-world aligned LBSNs show that TRAIL can achieve very good performance and substantially outperform the baseline methods.",
"",
"",
"Nowadays, many new social networks offering specific services spring up overnight. In this paper, we want to detect communities for emerging networks. Community detection for emerging networks is very challenging as information in emerging networks is usually too sparse for traditional methods to calculate effective closeness scores among users and achieve good community detection results. Meanwhile, users nowadays usually join multiple social networks simultaneously, some of which are developed and can share common information with the emerging networks. Based on both link and attribution information across multiple networks, a new general closeness measure, intimacy, is introduced in this paper. With both micro and macro controls, an effective and efficient method, CAD (Cold stArt community Detector), is proposed to propagate information from developed network to calculate effective intimacy scores among users in emerging networks. Extensive experiments conducted on real-world social networks demonstrate that CAD can perform very well in addressing the emerging network community detection problem."
]
} |
1507.06462 | 1037300011 | In this paper we revisit some pioneering efforts to equip Petri nets with compact operational models for expressing causality. The models we propose have a bisimilarity relation and a minimal representative for each equivalence class, and they can be fully explained as coalgebras on a presheaf category on an index category of partial orders. First, we provide a set-theoretic model in the form of a causal case graph, that is a labeled transition system where states and transitions represent markings and firings of the net, respectively, and are equipped with causal information. Most importantly, each state has a poset representing causal dependencies among past events. Our first result shows the correspondence with behavior structure semantics as proposed by Trakhtenbrot and Rabinovich. Causal case graphs may be infinitely-branching and have infinitely many states, but we show how they can be refined to get an equivalent finitely-branching model. In it, states only keep the most recent causes for each token, are up to isomorphism, and are equipped with a symmetry, i.e., a group of poset isomorphisms. Symmetries are essential for the existence of a minimal, often finite-state, model. This first part requires no knowledge of category theory. The next step is constructing a coalgebraic model. We exploit the fact that events can be represented as names, and event generation as name generation. Thus we can apply the Fiore-Turi framework, where the semantics of nominal calculi are modeled as coalgebras over presheaves. We model causal relations as a suitable category of posets with action labels, and generation of new events with causal dependencies as an endofunctor on this category. Presheaves indexed by labeled posets represent the functorial association between states and their causal information. Then we define a well-behaved category of coalgebras.
Our coalgebraic model is still infinite-state, but we exploit the equivalence between coalgebras over a class of presheaves and History Dependent automata to derive a compact representation, which is equivalent to our set-theoretical compact model. Remarkably, state reduction is automatically performed along the equivalence. | This paper follows a line of research on coalgebraic models of causality, started in @cite_0 by the same authors. The categorical machinery is the same in both papers, namely presheaf-based coalgebras, HD-automata, and the equivalence among them. However, this paper takes a further step towards a general categorical theory of causality. In @cite_0 , in fact, we have provided models for a particular class of causal LTSs, namely Degano-Darondeau ones. In this paper, instead, we treat Petri nets, which are much more general. For instance, unlike Degano-Darondeau LTSs, Petri nets can describe synchronizations of more than two processes. | {
"cite_N": [
"@cite_0"
],
"mid": [
"1983914828"
],
"abstract": [
"In this paper we recast the classical Darondeau---Degano's causal semantics of concurrency in a coalgebraic setting, where we derive a compact model. Our construction is inspired by the one of Montanari and Pistore yielding causal automata, but we show that it is instance of an existing categorical framework for modeling the semantics of nominal calculi, whose relevance is further demonstrated. The key idea is to represent events as names, and the occurrence of a new event as name generation. We model causal semantics as a coalgebra over a presheaf, along the lines of the Fiore---Turi approach to the semantics of nominal calculi. More specifically, we take a suitable category of finite posets, representing causal relations over events, and we equip it with an endofunctor that allocates new events and relates them to their causes. Presheaves over this category express the relationship between processes and causal relations among the processes' events. We use the allocation operator to define a category of well-behaved coalgebras: it models the occurrence of a new event along each transition. Then we turn the causal transition relation into a coalgebra in this category, where labels only exhibit maximal events with respect to the source states' poset, and we show that its bisimilarity is essentially Darondeau---Degano's strong causal bisimilarity. This coalgebra is still infinite-state, but we exploit the equivalence between coalgebras over a class of presheaves and History Dependent automata to derive a compact representation, where states only retain the poset of the most recent events for each atomic subprocess, and are isomorphic up to order-preserving permutations. Remarkably, this reduction of states is automatically performed along the equivalence."
]
} |
1507.06462 | 1037300011 | In this paper we revisit some pioneering efforts to equip Petri nets with compact operational models for expressing causality. The models we propose have a bisimilarity relation and a minimal representative for each equivalence class, and they can be fully explained as coalgebras on a presheaf category on an index category of partial orders. First, we provide a set-theoretic model in the form of a causal case graph, that is a labeled transition system where states and transitions represent markings and firings of the net, respectively, and are equipped with causal information. Most importantly, each state has a poset representing causal dependencies among past events. Our first result shows the correspondence with behavior structure semantics as proposed by Trakhtenbrot and Rabinovich. Causal case graphs may be infinitely-branching and have infinitely many states, but we show how they can be refined to get an equivalent finitely-branching model. In it, states only keep the most recent causes for each token, are up to isomorphism, and are equipped with a symmetry, i.e., a group of poset isomorphisms. Symmetries are essential for the existence of a minimal, often finite-state, model. This first part requires no knowledge of category theory. The next step is constructing a coalgebraic model. We exploit the fact that events can be represented as names, and event generation as name generation. Thus we can apply the Fiore-Turi framework, where the semantics of nominal calculi are modeled as coalgebras over presheaves. We model causal relations as a suitable category of posets with action labels, and generation of new events with causal dependencies as an endofunctor on this category. Presheaves indexed by labeled posets represent the functorial association between states and their causal information. Then we define a well-behaved category of coalgebras.
Our coalgebraic model is still infinite-state, but we exploit the equivalence between coalgebras over a class of presheaves and History Dependent automata to derive a compact representation, which is equivalent to our set-theoretical compact model. Remarkably, state reduction is automatically performed along the equivalence. | In @cite_0 we start from existing set-theoretic models, similar to abstract CGs, whereas the models we introduce here are novel. In both papers we represent causal dependencies as posets over events, but in @cite_0 events are unlabeled and are canonically represented as natural numbers. Here we have labels and we take a more general approach: instead of choosing specific representatives of events, we make abstract CGs parametric in this choice. This requires more technical work and it further validates the categorical approach, where book-keeping details are abstracted away. The categorical environment in this paper is more elaborate than that of @cite_0 , due to labeling. In particular, event generation is more complex, and is studied in greater detail. Another difference is that here we give conditions under which the model with only immediate causes is finite, whereas in @cite_0 decidability is not treated. | {
"cite_N": [
"@cite_0"
],
"mid": [
"1983914828"
],
"abstract": [
"In this paper we recast the classical Darondeau---Degano's causal semantics of concurrency in a coalgebraic setting, where we derive a compact model. Our construction is inspired by the one of Montanari and Pistore yielding causal automata, but we show that it is instance of an existing categorical framework for modeling the semantics of nominal calculi, whose relevance is further demonstrated. The key idea is to represent events as names, and the occurrence of a new event as name generation. We model causal semantics as a coalgebra over a presheaf, along the lines of the Fiore---Turi approach to the semantics of nominal calculi. More specifically, we take a suitable category of finite posets, representing causal relations over events, and we equip it with an endofunctor that allocates new events and relates them to their causes. Presheaves over this category express the relationship between processes and causal relations among the processes' events. We use the allocation operator to define a category of well-behaved coalgebras: it models the occurrence of a new event along each transition. Then we turn the causal transition relation into a coalgebra in this category, where labels only exhibit maximal events with respect to the source states' poset, and we show that its bisimilarity is essentially Darondeau---Degano's strong causal bisimilarity. This coalgebra is still infinite-state, but we exploit the equivalence between coalgebras over a class of presheaves and History Dependent automata to derive a compact representation, where states only retain the poset of the most recent events for each atomic subprocess, and are isomorphic up to order-preserving permutations. Remarkably, this reduction of states is automatically performed along the equivalence."
]
} |
1507.06462 | 1037300011 | In this paper we revisit some pioneering efforts to equip Petri nets with compact operational models for expressing causality. The models we propose have a bisimilarity relation and a minimal representative for each equivalence class, and they can be fully explained as coalgebras on a presheaf category on an index category of partial orders. First, we provide a set-theoretic model in the form of a causal case graph, that is a labeled transition system where states and transitions represent markings and firings of the net, respectively, and are equipped with causal information. Most importantly, each state has a poset representing causal dependencies among past events. Our first result shows the correspondence with behavior structure semantics as proposed by Trakhtenbrot and Rabinovich. Causal case graphs may be infinitely-branching and have infinitely many states, but we show how they can be refined to get an equivalent finitely-branching model. In it, states only keep the most recent causes for each token, are up to isomorphism, and are equipped with a symmetry, i.e., a group of poset isomorphisms. Symmetries are essential for the existence of a minimal, often finite-state, model. This first part requires no knowledge of category theory. The next step is constructing a coalgebraic model. We exploit the fact that events can be represented as names, and event generation as name generation. Thus we can apply the Fiore-Turi framework, where the semantics of nominal calculi are modeled as coalgebras over presheaves. We model causal relations as a suitable category of posets with action labels, and generation of new events with causal dependencies as an endofunctor on this category. Presheaves indexed by labeled posets represent the functorial association between states and their causal information. Then we define a well-behaved category of coalgebras.
Our coalgebraic model is still infinite-state, but we exploit the equivalence between coalgebras over a class of presheaves and History Dependent automata to derive a compact representation, which is equivalent to our set-theoretical compact model. Remarkably, state reduction is automatically performed along the equivalence. | A first version of HD-automata for Petri nets has been introduced in @cite_25 . However, their construction is purely set-theoretical and does not include symmetries, so the existence of a minimal model is not guaranteed. This version of HD-automata is similar to what we call immediate causes CG (without symmetries). HD-automata with symmetries were developed for the @math -calculus in @cite_20 @cite_15 , and a general categorical treatment was provided in @cite_2 . In all these cases nominal structures associated with states are just sets of (event) names, whereas we have posets, which are more adequate to represent causal dependencies. | {
"cite_N": [
"@cite_15",
"@cite_25",
"@cite_20",
"@cite_2"
],
"mid": [
"2077397316",
"1490651621",
"",
"2053839461"
],
"abstract": [
"The coalgebraic framework developed for the classical process algebras, and in particular its advantages concerning minimal realizations, does not fully apply to the π-calculus, due to the constraints on the freshly generated names that appear in the bisimulation. In this paper we propose to model the transition system of the π-calculus as a coalgebra on a category of name permutation algebras and to define its abstract semantics as the final coalgebra of such a category. We show that permutations are sufficient to represent in an explicit way fresh name generation, thus allowing for the definition of minimal realizations. We also link the coalgebraic semantics with a slightly improved version of history dependent (HD) automata, a model developed for verification purposes, where states have local names and transitions are decorated with names and name relations. HD-automata associated with agents with a bounded number of threads in their derivatives are finite and can be actually minimized. We show that the bisimulation relation in the coalgebraic context corresponds to the minimal HD-automaton.",
"In this paper we propose a new approach to check history-preserving equivalence for Petri nets. Exploiting this approach, history-preserving bisimulation is proved decidable for the class of finite nets which are n-safe for some n (the approaches of [17] and of [8] work just for 1-safe nets). Moreover, since we map nets on ordinary transition systems, standard results and algorithms can be re-used, yielding for instance the possibility of deriving minimal realizations. The proposed approach can be applied also to other concurrent formalisms based on partial order semantics, like CCS with causality [4].",
"",
"The semantics of name-passing calculi is often defined employing coalgebraic models over presheaf categories. This elegant theory lacks finiteness properties, hence it is not apt to implementation. Coalgebras over named sets, called history-dependent automata, are better suited for the purpose due to locality of names. A theory of behavioural functors for named sets is still lacking: the semantics of each language has been given in an ad-hoc way, and algorithms were implemented only for the π-calculus. Existence of the final coalgebra for the π-calculus was never proved. We introduce a language of accessible functors to specify history-dependent automata in a modular way, leading to a clean formulation and a generalisation of previous results, and to the proof of existence of a final coalgebra in a wide range of cases."
]
} |
1507.06462 | 1037300011 | In this paper we revisit some pioneering efforts to equip Petri nets with compact operational models for expressing causality. The models we propose have a bisimilarity relation and a minimal representative for each equivalence class, and they can be fully explained as coalgebras on a presheaf category on an index category of partial orders. First, we provide a set-theoretic model in the form of a causal case graph, that is a labeled transition system where states and transitions represent markings and firings of the net, respectively, and are equipped with causal information. Most importantly, each state has a poset representing causal dependencies among past events. Our first result shows the correspondence with behavior structure semantics as proposed by Trakhtenbrot and Rabinovich. Causal case graphs may be infinitely-branching and have infinitely many states, but we show how they can be refined to get an equivalent finitely-branching model. In it, states only keep the most recent causes for each token, are up to isomorphism, and are equipped with a symmetry, i.e., a group of poset isomorphisms. Symmetries are essential for the existence of a minimal, often finite-state, model. This first part requires no knowledge of category theory. The next step is constructing a coalgebraic model. We exploit the fact that events can be represented as names, and event generation as name generation. Thus we can apply the Fiore-Turi framework, where the semantics of nominal calculi are modeled as coalgebras over presheaves. We model causal relations as a suitable category of posets with action labels, and generation of new events with causal dependencies as an endofunctor on this category. Presheaves indexed by labeled posets represent the functorial association between states and their causal information. Then we define a well-behaved category of coalgebras.
Our coalgebraic model is still infinite-state, but we exploit the equivalence between coalgebras over a class of presheaves and History Dependent automata to derive a compact representation, which is equivalent to our set-theoretical compact model. Remarkably, state reduction is automatically performed along the equivalence. | We can cite @cite_8 for the introduction of transition systems for causality whose states are elements of presheaves, intended to model the causal semantics of the @math -calculus as defined in @cite_9 . However, the index of a state is a set of names, without any information about events and causal relations. The advantage of our index category is that it allows reducing the state-space in an automatic way, exploiting a standard categorical construction. This cannot be done in the framework of @cite_8 . Finally, an HD-automaton for causality has been described in @cite_2 , but it is derived as a direct translation of causal automata and its states do not take into account causal relations. | {
"cite_N": [
"@cite_9",
"@cite_2",
"@cite_8"
],
"mid": [
"2048909687",
"2053839461",
"2037276526"
],
"abstract": [
"We examine the meaning of causality in calculi for mobile processes like the π-calculus, and we investigate the relationship between interleaving and causal semantics for such calculi. We separate two forms of causal dependencies on actions of π-calculus processes, called subject and object dependencies: The former originate from the nesting of prefixes and are propagated through interactions among processes (they are the only form of causal dependencies present in CCS-like languages); the latter originate from the binding mechanisms on names. We propose a notion of causal bisimulation which distinguishes processes which differ for the subject or for the object dependencies. We show that this causal equivalence can be reconducted to, or implemented into, the ordinary interleaving observation equivalence. We prove that our encoding is fully abstract w.r.t. the two behavioural equivalences. This allows us to exploit the simpler theory of the interleaving semantics to reason about the causal one. In [San94b] a similar programme is carried out for location bisimulation [BCHK91], a non-interleaving spatial-sensitive (as opposed to causal-sensitive) behavioural equivalence. The comparison between the encodings of causal bisimulation in this paper, and of location bisimulation in [San94b], evidences the similarities and the differences between these two equivalences.",
"The semantics of name-passing calculi is often defined employing coalgebraic models over presheaf categories. This elegant theory lacks finiteness properties, hence it is not apt to implementation. Coalgebras over named sets, called history-dependent automata, are better suited for the purpose due to locality of names. A theory of behavioural functors for named sets is still lacking: the semantics of each language has been given in an ad-hoc way, and algorithms were implemented only for the π-calculus. Existence of the final coalgebra for the π-calculus was never proved. We introduce a language of accessible functors to specify history-dependent automata in a modular way, leading to a clean formulation and a generalisation of previous results, and to the proof of existence of a final coalgebra in a wide range of cases.",
"We study syntax-free models for name-passing processes. For interleaving semantics, we identify the indexing structure required of an early labelled transition system to support the usual π-calculus operations, defining Indexed Labelled Transition Systems. For non-interleaving causal semantics we define Indexed Labelled Asynchronous Transition Systems, smoothly generalizing both our interleaving model and the standard Asynchronous Transition Systems model for CCS-like calculi. In each case we relate a denotational semantics to an operational view, for bisimulation and causal bisimulation respectively. We establish completeness properties of, and adjunctions between, categories of the two models. Alternative indexing structures and possible applications are also discussed. These are first steps towards a uniform understanding of the semantics and operations of name-passing calculi."
]
} |
1507.06462 | 1037300011 | In this paper we revisit some pioneering efforts to equip Petri nets with compact operational models for expressing causality. The models we propose have a bisimilarity relation and a minimal representative for each equivalence class, and they can be fully explained as coalgebras on a presheaf category on an index category of partial orders. First, we provide a set-theoretic model in the form of a causal case graph, that is a labeled transition system where states and transitions represent markings and firings of the net, respectively, and are equipped with causal information. Most importantly, each state has a poset representing causal dependencies among past events. Our first result shows the correspondence with behavior structure semantics as proposed by Trakhtenbrot and Rabinovich. Causal case graphs may be infinitely-branching and have infinitely many states, but we show how they can be refined to get an equivalent finitely-branching model. In it, states only keep the most recent causes for each token, are up to isomorphism, and are equipped with a symmetry, i.e., a group of poset isomorphisms. Symmetries are essential for the existence of a minimal, often finite-state, model. This first part requires no knowledge of category theory. The next step is constructing a coalgebraic model. We exploit the fact that events can be represented as names, and event generation as name generation. Thus we can apply the Fiore-Turi framework, where the semantics of nominal calculi are modeled as coalgebras over presheaves. We model causal relations as a suitable category of posets with action labels, and generation of new events with causal dependencies as an endofunctor on this category. Presheaves indexed by labeled posets represent the functorial association between states and their causal information. Then we define a well-behaved category of coalgebras.
Our coalgebraic model is still innite-state, but we exploit the equivalence between coalgebras over a class of presheaves and History Dependent automata to derive a compact representation, which is equivalent to our set-theoretical compact model. Remarkably, state reduction is automatically performed along the equivalence. | Other related works are @cite_18 @cite_24 , where event structures have been characterized as (contravariant) presheaves on posets. While the meaning of presheaves is similar, the context is different: we consider the more concrete realm of coalgebras and nominal automata. A more precise correspondence with such models should be worked out. | {
"cite_N": [
"@cite_24",
"@cite_18"
],
"mid": [
"1567819073",
"2116183278"
],
"abstract": [
"The category of event structures is known to embed fully and faithfully in the category of presheaves over pomsets. Here a characterisation of the presheaves represented by event structures is presented. The proof goes via a characterisation of the presheaves represented by event structures when the morphisms on event structures are \"strict\" in that they preserve the partial order of causal dependency.",
"This paper establishes a bridge between presheaf models for concurrency and the more operationally-informative world of event structures. It concentrates on a particular presheaf category, consisting of presheaves over finite partial orders of events; such presheaves form a model of nondeterministic processes in which the computation paths have the shape of partial orders. It is shown how with the introduction of symmetry event structures represent all presheaves over finite partial orders. This is in contrast with plain event structures which only represent certain separated presheaves. Specifically a coreflection from the category of presheaves to the category of event structures with symmetry is exhibited. It is shown how the coreflection can be cut down to an equivalence between the presheaf category and the subcategory of graded event structures with symmetry. Event structures with strong symmetries are shown to represent precisely all the separated presheaves. The broader context and specific applications to the unfolding of higher-dimensional automata and Petri nets, and weak bisimulation on event structures are sketched."
]
} |
1507.06827 | 2240687019 | We consider the egalitarian welfare aspects of random assignment mechanisms when agents have unrestricted cardinal utilities over the objects. We give bounds on how well different random assignment mechanisms approximate the optimal egalitarian value and investigate the effect that different well-known properties like ordinality, envy-freeness, and truthfulness have on the achievable egalitarian value. Finally, we conduct detailed experiments analyzing the tradeoffs between efficiency with envy-freeness or truthfulness using two prominent random assignment mechanisms --- random serial dictatorship and the probabilistic serial mechanism --- for different classes of utility functions and distributions. | The assignment problem has been in the center of attention in recent years in both computer science and economics . Often, in the classical assignment literature, agents are assumed to have an underlying cardinal utility preference structure, even if they are not asked to report it explicitly. On the other hand, there are many examples of well-known cardinal mechanisms, such as the pseudo-market (PM) mechanism of and the competitive equilibrium with equal incomes (CEEI) mechanism . Both mechanisms return allocations that are envy-free in expectation. The two prominent ordinal mechanisms in the literature are the probabilistic serial mechanism (PS) and random serial dictatorship (RSD), a folklore mechanism that pre-existed the formulation of the assignment problem in @cite_9 . Later, proposed a variant of PS called that was formalised and axiomatically studied by . | {
"cite_N": [
"@cite_9"
],
"mid": [
"2008751404"
],
"abstract": [
"In a variety of contexts, individuals must be allocated to positions with limited capacities. Legislators must be assigned to committees, college students to dormitories, and urban homesteaders to dwellings. (A general class of fair division problems would have the positions represent goods.) This paper examines the general problem of achieving efficient allocations when individuals' preferences are unknown and where (as with a growing number of nonmarket allocation schemes) there is no facilitating external medium of exchange such as money. An implicit market procedure is developed that elicits honest preferences, that assigns individuals efficiently, and that is adaptable to a variety of distributional objectives."
]
} |
1507.06867 | 992412171 | Let @math be a finite group. To any family @math of subgroups of @math , we associate a thick @math -ideal @math of the category of @math -spectra with the property that every @math -spectrum in @math (which we call @math -nilpotent) can be reconstructed from its underlying @math -spectra as @math varies over @math . A similar result holds for calculating @math -equivariant homotopy classes of maps into such spectra via an appropriate homotopy limit spectral sequence. In general, the condition @math implies strong collapse results for this spectral sequence as well as its dual homotopy colimit spectral sequence. As applications, we obtain Artin and Brauer type induction theorems for @math -equivariant @math -homology and cohomology, and generalizations of Quillen's @math -isomorphism theorem when @math is a homotopy commutative @math -ring spectrum. We show that the subcategory @math contains many @math -spectra of interest for relatively small families @math . These include @math -equivariant real and complex @math -theory as well as the Borel-equivariant cohomology theories associated to complex oriented ring spectra, any @math -local spectrum, the classical bordism theories, connective real @math -theory, and any of the standard variants of topological modular forms. In each of these cases we identify the minimal family such that these results hold. | In @cite_16 Fausk shows that [Thm. A] HKR00 can be generalized in several ways if one makes some additional assumptions. First, Fausk proves the analogue of thm:gen-artin when @math and @math is a compact Lie group. Moreover, Fausk proves thm:gen-artin when @math (or a closely related ring spectrum), @math is a finite group, @math , and @math is torsion-free (e.g., when @math is a good group in the sense of @cite_29 ). Fausk also obtains generalized Brauer induction theorems in these contexts. Fausk's results do not require a finiteness assumption on @math . | {
"cite_N": [
"@cite_29",
"@cite_16"
],
"mid": [
"1542454678",
"2591650896"
],
"abstract": [
"Let BG be the classifying space of a finite group G. Given a multiplicative cohomology theory E*, the assignment G ↦ E*(BG) is a functor from groups to rings, endowed with induction (transfer) maps. In this paper we investigate these functors for complex oriented cohomology theories E*, using the theory of complex representations of finite groups as a model for what one would like to know. An analogue of Artin's Theorem is proved for all complex oriented E*: the abelian subgroups of G serve as a detecting family for E*(BG), modulo torsion dividing the order of G. When E* is a complete local ring, with residue field of characteristic p and associated formal group of height n, we construct a character ring of class functions that computes p^{-1}E*(BG). The domain of the characters is G_{n,p}, the set of n-tuples of elements in G each of which has order a power of p. A formula for induction is also found. The ideas we use are related to the Lubin-Tate theory of formal groups. The construction applies to many cohomology theories of current interest: completed versions of elliptic cohomology, E_n-theory, etc. The nth Morava K-theory Euler characteristic for BG is computed to be the number of G-orbits in G_{n,p}. For various groups G, including all symmetric groups, we prove that K(n)*(BG) is concentrated in even degrees. Our results about E*(BG) extend to theorems about E*(EG ×_G X), where X is a finite G-CW complex.",
"Let G be a compact Lie group. We present two induction theorems for certain generalized G-equivariant cohomology theories. The theory applies to G-equivariant K-theory K_G, and to the Borel cohomology associated with any complex oriented cohomology theory. The coefficient ring of K_G is the representation ring R(G) of G. When G is a finite group the induction theorems for K_G coincide with the classical Artin and Brauer induction theorems for R(G)."
]
} |
1507.06527 | 2121092017 | Deep Reinforcement Learning has yielded proficient controllers for complex tasks. However, these controllers have limited memory and rely on being able to perceive the complete game screen at each decision point. To address these shortcomings, this article investigates the effects of adding recurrency to a Deep Q-Network (DQN) by replacing the first post-convolutional fully-connected layer with a recurrent LSTM. The resulting Deep Recurrent Q-Network (DRQN), although capable of seeing only a single frame at each timestep, successfully integrates information through time and replicates DQN's performance on standard Atari games and partially observed equivalents featuring flickering game screens. Additionally, when trained with partial observations and evaluated with incrementally more complete observations, DRQN's performance scales as a function of observability. Conversely, when trained with full observations and evaluated with partial observations, DRQN's performance degrades less than DQN's. Thus, given the same length of history, recurrency is a viable alternative to stacking a history of frames in the DQN's input layer and while recurrency confers no systematic advantage when learning to play the game, the recurrent net can better adapt at evaluation time if the quality of observations changes. | Previously, LSTM networks have been demonstrated to solve POMDPs when trained using policy gradient methods @cite_7 . In contrast to policy gradient, our work uses temporal-difference updates to bootstrap an action-value function. Additionally, by jointly training convolutional and LSTM layers we are able to learn directly from pixels and do not require hand-engineered features. | {
"cite_N": [
"@cite_7"
],
"mid": [
"1662842982"
],
"abstract": [
"This paper presents Recurrent Policy Gradients, a model-free reinforcement learning (RL) method creating limited-memory stochastic policies for partially observable Markov decision problems (POMDPs) that require long-term memories of past observations. The approach involves approximating a policy gradient for a Recurrent Neural Network (RNN) by backpropagating return-weighted characteristic eligibilities through time. Using a \"Long Short-Term Memory\" architecture, we are able to outperform other RL methods on two important benchmark tasks. Furthermore, we show promising results on a complex car driving simulation task."
]
} |
1507.06527 | 2121092017 | Deep Reinforcement Learning has yielded proficient controllers for complex tasks. However, these controllers have limited memory and rely on being able to perceive the complete game screen at each decision point. To address these shortcomings, this article investigates the effects of adding recurrency to a Deep Q-Network (DQN) by replacing the first post-convolutional fully-connected layer with a recurrent LSTM. The resulting Deep Recurrent Q-Network (DRQN), although capable of seeing only a single frame at each timestep, successfully integrates information through time and replicates DQN's performance on standard Atari games and partially observed equivalents featuring flickering game screens. Additionally, when trained with partial observations and evaluated with incrementally more complete observations, DRQN's performance scales as a function of observability. Conversely, when trained with full observations and evaluated with partial observations, DRQN's performance degrades less than DQN's. Thus, given the same length of history, recurrency is a viable alternative to stacking a history of frames in the DQN's input layer and while recurrency confers no systematic advantage when learning to play the game, the recurrent net can better adapt at evaluation time if the quality of observations changes. | In parallel to our work, @cite_14 independently combined LSTM with Deep Reinforcement Learning to demonstrate that recurrency helps to better play text-based fantasy games. The approach is similar but the domains differ: despite the apparent complexity of the fantasy-generated text, the underlying MDPs feature relatively low-dimensional manifolds of underlying state space. The more complex of the two games features only 56 underlying states. Atari games, in contrast, feature a much richer state space with typical games having millions of different states. However, the action space of the text games is much larger with a branching factor of 222 versus Atari's 18. | {
"cite_N": [
"@cite_14"
],
"mid": [
"2949801941"
],
"abstract": [
"In this paper, we consider the task of learning control policies for text-based games. In these games, all interactions in the virtual world are through text and the underlying state is not observed. The resulting language barrier makes such environments challenging for automatic game players. We employ a deep reinforcement learning framework to jointly learn state representations and action policies using game rewards as feedback. This framework enables us to map text descriptions into vector representations that capture the semantics of the game states. We evaluate our approach on two game worlds, comparing against baselines using bag-of-words and bag-of-bigrams for state representations. Our algorithm outperforms the baselines on both worlds demonstrating the importance of learning expressive representations."
]
} |
1507.06838 | 2276571577 | Gender classification (GC) has achieved high accuracy in different experimental evaluations based mostly on inner facial details. However, these results do not generalize well in unrestricted datasets and particularly in cross-database experiments, where the performance drops drastically. In this paper, we analyze the state-of-the-art GC accuracy on three large datasets: MORPH, LFW and GROUPS. We discuss their respective difficulties and bias, concluding that the most challenging and wildest complexity is present in GROUPS. This dataset covers hard conditions such as low resolution imagery and cluttered background. Firstly, we analyze in depth the performance of different descriptors extracted from the face and its local context on this dataset. Selecting the best and studying their most suitable combination allows us to design a solution that beats any previously published results for GROUPS with the Dago's protocol, reaching an accuracy over 94.2%, reducing the gap with other simpler datasets. The chosen solution based on local descriptors is later evaluated in a cross-database scenario with the three mentioned datasets, and full dataset 5-fold cross validation. The achieved results are compared with a Convolutional Neural Network approach, achieving rather similar marks. Finally, a solution is proposed combining both focuses, exhibiting great complementarity, boosting GC performance to beat previously published results in GC in both cross-database and full in-database evaluations. Evaluation of local descriptors for GC in The Images of Groups dataset; Broad evaluation of GC in cross-database scenarios; Comparison with CNN; Fusion of local descriptors and CNN; Achieving state of the art accuracies in large in the wild datasets | In this paper, a first objective is to analyze GC results of different datasets to identify those that are closer to real scenarios. As mentioned above, most state-of-the-art GC approaches focus on the facial pattern. 
This is evidenced by the latest problem surveys @cite_39 @cite_2 , and recent results in major journals @cite_6 @cite_5 @cite_42 @cite_36 @cite_51 @cite_0 @cite_11 . | {
"cite_N": [
"@cite_36",
"@cite_42",
"@cite_6",
"@cite_39",
"@cite_0",
"@cite_2",
"@cite_5",
"@cite_51",
"@cite_11"
],
"mid": [
"2094248163",
"2023161323",
"2119486225",
"1524707035",
"2060731032",
"2750067310",
"1992793057",
"2158617780",
"2167342253"
],
"abstract": [
"Gender recognition from face images has many applications and is thus an important research topic. This paper presents an approach to gender recognition based on shape, texture and plain intensity features gathered at different scales. We also propose a new dataset for gender evaluation based on images from the UND database. This allows for precise comparison of different algorithms over the same data. The experiments showed that information from different scales, even if just from a single feature, is more important than having information from different features at a single scale. The presented approach is quite competitive with above 90% accuracy in both evaluated datasets.",
"We automatically assemble a big dataset to train a face gender classifier. This is formed by 4 million images and over 60,000 features. The resulting system significantly outperforms the previous state of the art without human annotation. This study lends support to the \"unreasonable effectiveness of data\" conjecture. This study is relevant to computer vision (LBP features, face classification), machine learning (large scale linear classifiers), and big data. This study can serve as a template for other \"web scale\" learning tasks. The application of learning algorithms to big datasets has been identified for a long time as an effective way to attack important tasks in pattern recognition, but the generation of large annotated datasets has a significant cost. We present a simple and effective method to generate a classifier of face images, by training a linear classification algorithm on a massive dataset entirely assembled and labelled by automated means. In doing so, we perform the largest experiment on face gender recognition so far published, reporting the highest performance yet. Four million images and more than 60,000 features are used to train online classifiers. By using an ensemble of linear classifiers, we achieve an accuracy of 96.86% on the most challenging public database, labelled faces in the wild (LFW), 2.05% higher than the previous best result on the same dataset (Shan, 2012). This result is relevant both for the machine learning community, addressing the role of large datasets, and the computer vision community, providing a way to make high quality face gender classifiers. Furthermore, we propose a general way to generate and exploit massive data without human annotation. Finally, we demonstrate a simple and effective adaptation of the Pegasos that makes it more robust.",
"Emerging applications of computer vision and pattern recognition in mobile devices and networked computing require the development of resource-limited algorithms. Linear classification techniques have an important role to play in this context, given their simplicity and low computational requirements. The paper reviews the state-of-the-art in gender classification, giving special attention to linear techniques and their relations. It discusses why linear techniques are not achieving competitive results and shows how to obtain state-of-the-art performances. Our work confirms previous results reporting very close classification accuracies for Support Vector Machines (SVMs) and boosting algorithms on single-database experiments. We have proven that Linear Discriminant Analysis on a linearly selected set of features also achieves similar accuracies. We perform cross-database experiments and prove that single database experiments were optimistically biased. If enough training data and computational resources are available, SVM's gender classifiers are superior to the rest. When computational resources are scarce but there is enough data, boosting or linear approaches are adequate. Finally, if training data and computational resources are very scarce, then the linear approach is the best choice.",
"Applications such as human---computer interaction, surveillance, biometrics and intelligent marketing would benefit greatly from knowledge of the attributes of the human subjects under scrutiny. The gender of a person is one such significant demographic attribute. This paper provides a review of facial gender recognition in computer vision. It is certainly not a trivial task to identify gender from images of the face. We highlight the challenges involved, which can be divided into human factors and those introduced during the image capture process. A comprehensive survey of facial feature extraction methods for gender recognition studied in the past couple of decades is provided. We appraise the datasets used for evaluation of gender classification performance. Based on the results reported, good performance has been achieved for images captured under controlled environments, but certainly there is still much work that can be done to improve the robustness of gender recognition under real-life environments.",
"In this paper, we report our extension of the use of feature selection based on mutual information and feature fusion to improve gender classification of face images. We compare the results of fusing three groups of features, three spatial scales, and four different mutual information measures to select features. We also showed improved results by fusion of LBP features with different radii and spatial scales, and the selection of features using mutual information. As measures of mutual information we use minimum redundancy and maximal relevance (mRMR), normalized mutual information feature selection (NMIFS), conditional mutual information feature selection (CMIFS), and conditional mutual information maximization (CMIM). We tested the results on four databases: FERET and UND, under controlled conditions, the LFW database under unconstrained scenarios, and AR for occlusions. It is shown that selection of features together with fusion of LBP features significantly improved gender classification accuracy compared to previously published results. We also show a significant reduction in processing time because of the feature selection, which makes real-time applications of gender classification feasible.",
"",
"We used both inner and outer face cues. External cues improve classification performance for gender recognition. FIS framework improves classification results when combined with SVM. Unconstrained databases provide better results than that of constrained databases. We obtained 93.35% accuracy on Groups LFW cross-database test. In this paper, we propose a novel gender recognition framework based on a fuzzy inference system (FIS). Our main objective is to study the gain brought by FIS in presence of various visual sensors (e.g., hair, mustache, inner face). We use inner and outer facial features to extract input variables. First, we define the fuzzy statements and then we generate a knowledge base composed of a set of rules over the linguistic variables including hair volume, mustache and a vision-sensor. Hair volume and mustache information are obtained from Part Labels subset of Labeled Faces in the Wild (LFW) database and vision-sensor is obtained from a pixel-intensity based SVM+RBF classifier trained on different databases including Feret, Groups and GENKI-4K. Cross-database test experiments on LFW database showed that the proposed method provides better accuracy than optimized SVM+RBF only classification. We also showed that FIS increases the inter-class variability by decreasing false negatives (FN) and false positives (FP) using expert knowledge. Our experimental results yield an average accuracy of 93.35% using Groups LFW test, while the SVM performance baseline yields 91.25% accuracy.",
"We present a systematic study on gender classification with automatically detected and aligned faces. We experimented with 120 combinations of automatic face detection, face alignment, and gender classification. One of the findings was that the automatic face alignment methods did not increase the gender classification rates. However, manual alignment increased classification rates a little, which suggests that automatic alignment would be useful when the alignment methods are further improved. We also found that the gender classification methods performed almost equally well with different input image sizes. In any case, the best classification rate was achieved with a support vector machine. A neural network and Adaboost achieved almost as good classification rates as the support vector machine and could be used in applications where classification speed is considered more important than the maximum classification accuracy.",
"This paper presents a thorough study of gender classification methodologies performing on neutral, expressive and partially occluded faces, when they are used in all possible arrangements of training and testing roles. A comprehensive comparison of two representation approaches (global and local), three types of features (grey levels, PCA and LBP), three classifiers (1-NN, PCA+LDA and SVM) and two performance measures (CCR and d') is provided over single- and cross-database experiments. Experiments revealed some interesting findings, which were supported by three non-parametric statistical tests: when training and test sets contain different types of faces, local models using the 1-NN rule outperform global approaches, even those using SVM classifiers; however, with the same type of faces, even if the acquisition conditions are diverse, the statistical tests could not reject the null hypothesis of equal performance of global SVMs and local 1-NNs."
]
} |
1507.06838 | 2276571577 | Gender classification (GC) has achieved high accuracy in different experimental evaluations based mostly on inner facial details. However, these results do not generalize well in unrestricted datasets and particularly in cross-database experiments, where the performance drops drastically. In this paper, we analyze the state-of-the-art GC accuracy on three large datasets: MORPH, LFW and GROUPS. We discuss their respective difficulties and bias, concluding that the most challenging and wildest complexity is present in GROUPS. This dataset covers hard conditions such as low resolution imagery and cluttered background. Firstly, we analyze in depth the performance of different descriptors extracted from the face and its local context on this dataset. Selecting the best and studying their most suitable combination allows us to design a solution that beats any previously published results for GROUPS with the Dago's protocol, reaching an accuracy over 94.2%, reducing the gap with other simpler datasets. The chosen solution based on local descriptors is later evaluated in a cross-database scenario with the three mentioned datasets, and full dataset 5-fold cross validation. The achieved results are compared with a Convolutional Neural Network approach, achieving rather similar marks. Finally, a solution is proposed combining both focuses, exhibiting great complementarity, boosting GC performance to beat previously published results in GC in both cross-database and full in-database evaluations. Evaluation of local descriptors for GC in The Images of Groups dataset; Broad evaluation of GC in cross-database scenarios; Comparison with CNN; Fusion of local descriptors and CNN; Achieving state of the art accuracies in large in the wild datasets | Focusing on large and heterogeneous datasets, we highlight recent results reported for the non-public UCN @cite_6 , and the available MORPH @cite_38 , GROUPS @cite_48 , and LFW @cite_17 datasets. 
Observing the in-database rates in Table , there is not much room for improvement for LFW and MORPH. We argue that both datasets present some level of simplification that benefits the overall accuracy achieved. In fact, both include multiple samples of the same identity, a circumstance that clearly mixes gender and identity classification. On the other hand, GROUPS offers a less restricted scenario, reporting the lowest accuracy, with a large gap compared to other datasets. This evidence has convinced us to focus on this particular dataset, agreeing with the 2015 NIST report conclusion on the topic @cite_2 . We aim at reducing this GC accuracy gap. | {
"cite_N": [
"@cite_38",
"@cite_48",
"@cite_6",
"@cite_2",
"@cite_17"
],
"mid": [
"2118664399",
"2135964285",
"2119486225",
"2750067310",
"1782590233"
],
"abstract": [
"This paper details MORPH, a longitudinal face database developed for researchers investigating all facets of adult age-progression, e.g. face modeling, photo-realistic animation, face recognition, etc. This database contributes to several active research areas, most notably face recognition, by providing: the largest set of publicly available longitudinal images; longitudinal spans from a few months to over twenty years; and, the inclusion of key physical parameters that affect aging appearance. The direct contribution of this data corpus for face recognition is highlighted in the evaluation of a standard face recognition algorithm, which illustrates the impact that age-progression has on recognition rates. Assessment of the efficacy of this algorithm is evaluated against the variables of gender and racial origin. This work further concludes that the problem of age-progression on face recognition (FR) is not unique to the algorithm used in this work.",
"In many social settings, images of groups of people are captured. The structure of this group provides meaningful context for reasoning about individuals in the group, and about the structure of the scene as a whole. For example, men are more likely to stand on the edge of an image than women. Instead of treating each face independently from all others, we introduce contextual features that encapsulate the group structure locally (for each person in the group) and globally (the overall structure of the group). This “social context” allows us to accomplish a variety of tasks, such as such as demographic recognition, calculating scene and camera parameters, and even event recognition. We perform human studies to show this context aids recognition of demographic information in images of strangers.",
"Emerging applications of computer vision and pattern recognition in mobile devices and networked computing require the development of resource-limited algorithms. Linear classification techniques have an important role to play in this context, given their simplicity and low computational requirements. The paper reviews the state-of-the-art in gender classification, giving special attention to linear techniques and their relations. It discusses why linear techniques are not achieving competitive results and shows how to obtain state-of-the-art performances. Our work confirms previous results reporting very close classification accuracies for Support Vector Machines (SVMs) and boosting algorithms on single-database experiments. We have proven that Linear Discriminant Analysis on a linearly selected set of features also achieves similar accuracies. We perform cross-database experiments and prove that single database experiments were optimistically biased. If enough training data and computational resources are available, SVM's gender classifiers are superior to the rest. When computational resources are scarce but there is enough data, boosting or linear approaches are adequate. Finally, if training data and computational resources are very scarce, then the linear approach is the best choice.",
"",
"Most face databases have been created under controlled conditions to facilitate the study of specific parameters on the face recognition problem. These parameters include such variables as position, pose, lighting, background, camera quality, and gender. While there are many applications for face recognition technology in which one can control the parameters of image acquisition, there are also many applications in which the practitioner has little or no control over such parameters. This database, Labeled Faces in the Wild, is provided as an aid in studying the latter, unconstrained, recognition problem. The database contains labeled face photographs spanning the range of conditions typically encountered in everyday life. The database exhibits “natural” variability in factors such as pose, lighting, race, accessories, occlusions, and background. In addition to describing the details of the database, we provide specific experimental paradigms for which the database is suitable. This is done in an effort to make research performed with the database as consistent and comparable as possible. We provide baseline results, including results of a state of the art face recognition system combined with a face alignment system. To facilitate experimentation on the database, we provide several parallel databases, including an aligned version."
]
} |
1507.06838 | 2276571577 | Gender classification (GC) has achieved high accuracy in different experimental evaluations based mostly on inner facial details. However, these results do not generalize well in unrestricted datasets and particularly in cross-database experiments, where the performance drops drastically. In this paper, we analyze the state-of-the-art GC accuracy on three large datasets: MORPH, LFW and GROUPS. We discuss their respective difficulties and bias, concluding that the most challenging and wildest complexity is present in GROUPS. This dataset covers hard conditions such as low resolution imagery and cluttered background. Firstly, we analyze in depth the performance of different descriptors extracted from the face and its local context on this dataset. Selecting the best and studying their most suitable combination allows us to design a solution that beats any previously published results for GROUPS with Dago's protocol, reaching an accuracy over 94.2%, reducing the gap with other simpler datasets. The chosen solution based on local descriptors is later evaluated in a cross-database scenario with the three mentioned datasets, and full dataset 5-fold cross validation. The achieved results are compared with a Convolutional Neural Network approach, achieving rather similar marks. Finally, a solution is proposed combining both focuses, exhibiting great complementarity, boosting GC performance to beat previously published results in GC both cross-database, and full in-database evaluations. Evaluation of local descriptors for GC in The Images of Groups dataset. Broad evaluation of GC in cross-database scenarios. Comparison with CNN. Fusion of local descriptors and CNN. Achieving state-of-the-art accuracies in large in-the-wild datasets | Certainly, the need for facial features restricts the scope of application, requiring a visible and almost frontal face.
From another point of view, researchers have recently investigated the inclusion of external facial features @cite_10 @cite_55 such as hair and clothing @cite_35 @cite_47 @cite_8 , their combination with other cues @cite_1 , and even features extracted from the body @cite_16 @cite_28 . The latter are claimed to be better adapted to real surveillance scenarios, where the facial pattern is noisy, non-frontal, occluded, or of low resolution. However, their applicability is also restricted, as the body must not be occluded. | {
"cite_N": [
"@cite_35",
"@cite_8",
"@cite_28",
"@cite_55",
"@cite_1",
"@cite_47",
"@cite_16",
"@cite_10"
],
"mid": [
"146395692",
"2044030609",
"1498235585",
"2128560777",
"2034821857",
"2097616065",
"2032670732",
"2126448884"
],
"abstract": [
"Describing clothing appearance with semantic attributes is an appealing technique for many important applications. In this paper, we propose a fully automated system that is capable of generating a list of nameable attributes for clothes on human body in unconstrained images. We extract low-level features in a pose-adaptive manner, and combine complementary features for learning attribute classifiers. Mutual dependencies between the attributes are then explored by a Conditional Random Field to further improve the predictions from independent classifiers. We validate the performance of our system on a challenging clothing attribute dataset, and introduce a novel application of dressing style analysis that utilizes the semantic attributes produced by our system.",
"During the last decade, researchers have verified that clothing can provide information for gender recognition. However, before extracting features, it is necessary to segment the clothing region. We introduce a new clothes segmentation method based on the application of the GrabCut technique over a trixel mesh, obtaining very promising results for a close to real time system. Finally, the clothing features are combined with facial and head context information to outperform previous results in gender recognition with a public database.",
"In this paper we study the problem of gender recognition from the human body. To represent human body images for the purpose of gender recognition, we propose to use the biologically-inspired features in combination with manifold learning techniques. A framework is also proposed to deal with the body pose change or view difference in gender classification. Various manifold learning techniques are applied to the bio-inspired features and evaluated to show their performance in different cases. As a result, different manifold learning methods are used for different tasks, such as the body view classification and gender classification at different views. Based on the new representation and classification framework, a gender recognition accuracy of about 80% can be obtained on a publicly available pedestrian database.",
"We propose a method for recognizing attributes, such as the gender, hair style and types of clothes of people under large variation in viewpoint, pose, articulation and occlusion typical of personal photo album images. Robust attribute classifiers under such conditions must be invariant to pose, but inferring the pose in itself is a challenging problem. We use a part-based approach based on poselets. Our parts implicitly decompose the aspect (the pose and viewpoint). We train attribute classifiers for each such aspect and we combine them together in a discriminative model. We propose a new dataset of 8000 people with annotated attributes. Our method performs very well on this dataset, significantly outperforming a baseline built on the spatial pyramid match kernel method. On gender recognition we outperform a commercial face recognition system.",
"While many works consider moving faces only as collections of frames and apply still image-based methods, recent developments indicate that excellent results can be obtained using texture-based spatiotemporal representations for describing and analyzing faces in videos. Inspired by the psychophysical findings which state that facial movements can provide valuable information to face analysis, and also by our recent success in using LBP (local binary patterns) for combining appearance and motion for dynamic texture analysis, this paper investigates the combination of facial appearance (the shape of the face) and motion (the way a person is talking and moving his/her facial features) for face analysis in videos. We propose and study an approach for spatiotemporal face and gender recognition from videos using an extended set of volume LBP features and a boosting scheme. We experiment with several publicly available video face databases and consider different benchmark methods for comparison. Our extensive experimental analysis clearly assesses the promising performance of the LBP-based spatiotemporal representations for describing and analyzing faces in videos.",
"In this paper, we propose a novel gender classification framework, which utilizes not only facial features, but also external information, i.e. hair and clothing. Instead of using the whole face, we consider five facial components: forehead, eyes, nose, mouth and chin. We also design feature extraction methods for hair and clothing; these features have seldom been used in previous work because of their large variability. For each type of feature, we train a single support vector machine classifier with probabilistic output. The outputs of these classifiers are combined using various strategies, namely fuzzy integral, maximal, sum, voting, and product rule. The major contributions of this paper are (1) investigating the gender discriminative ability of clothing information; (2) using facial components instead of the whole face to obtain higher robustness for occlusions and noise; (3) exploiting hair and clothing information to facilitate gender classification. Experimental results show that our proposed framework improves classification accuracy, even when images contain occlusions, noise, and illumination changes.",
"In this paper we focus on building robust image representations for gender classification from full human bodies. We first investigate a number of state-of-the-art image representations with regard to their suitability for gender profiling from static body images. Features include Histogram of Gradients (HOG), spatial pyramid HOG and spatial pyramid bag of words etc. These representations are learnt and combined based on a kernel support vector machine (SVM) classifier. We compare a number of different SVM kernels for this task but conclude that the simple linear kernel appears to give the best overall performance. Our study shows that individual adoption of these representations for gender classification is not as promising as might be expected, given their good performance in the tasks of pedestrian detection on INRIA datasets, and object categorisation on Caltech 101 and Caltech 256 datasets. Our best results, 80% classification accuracy, were achieved from a combination of spatial shape information, captured by HOG, and colour information captured by HSV histogram based features. Additionally, to the best of our knowledge, currently there is no publicly available dataset for full body gender recognition. Hence, we further introduce a novel body gender dataset covering a large diversity of human body appearance.",
"We introduce the use of describable visual attributes for face verification and image search. Describable visual attributes are labels that can be given to an image to describe its appearance. This paper focuses on images of faces and the attributes used to describe them, although the concepts also apply to other domains. Examples of face attributes include gender, age, jaw shape, nose size, etc. The advantages of an attribute-based representation for vision tasks are manifold: They can be composed to create descriptions at various levels of specificity; they are generalizable, as they can be learned once and then applied to recognize new objects or categories without any further training; and they are efficient, possibly requiring exponentially fewer attributes (and training data) than explicitly naming each category. We show how one can create and label large data sets of real-world images to train classifiers which measure the presence, absence, or degree to which an attribute is expressed in images. These classifiers can then automatically label new images. We demonstrate the current effectiveness-and explore the future potential-of using attributes for face verification and image search via human and computational experiments. Finally, we introduce two new face data sets, named FaceTracer and PubFig, with labeled attributes and identities, respectively."
]
} |
1507.06838 | 2276571577 | Gender classification (GC) has achieved high accuracy in different experimental evaluations based mostly on inner facial details. However, these results do not generalize well in unrestricted datasets and particularly in cross-database experiments, where the performance drops drastically. In this paper, we analyze the state-of-the-art GC accuracy on three large datasets: MORPH, LFW and GROUPS. We discuss their respective difficulties and bias, concluding that the most challenging and wildest complexity is present in GROUPS. This dataset covers hard conditions such as low resolution imagery and cluttered background. Firstly, we analyze in depth the performance of different descriptors extracted from the face and its local context on this dataset. Selecting the best and studying their most suitable combination allows us to design a solution that beats any previously published results for GROUPS with Dago's protocol, reaching an accuracy over 94.2%, reducing the gap with other simpler datasets. The chosen solution based on local descriptors is later evaluated in a cross-database scenario with the three mentioned datasets, and full dataset 5-fold cross validation. The achieved results are compared with a Convolutional Neural Network approach, achieving rather similar marks. Finally, a solution is proposed combining both focuses, exhibiting great complementarity, boosting GC performance to beat previously published results in GC both cross-database, and full in-database evaluations. Evaluation of local descriptors for GC in The Images of Groups dataset. Broad evaluation of GC in cross-database scenarios. Comparison with CNN. Fusion of local descriptors and CNN. Achieving state-of-the-art accuracies in large in-the-wild datasets | Indeed, the inclusion of non-facial features is consistent with the human vision system, which employs external and other features for GC, such as gait, body contours, hair, clothing, voice, etc. @cite_10 @cite_4 .
These considerations seem to be of particular interest for degraded, low-resolution or noisy images @cite_56 . For those reasons, we will include in our study features extracted from the face and its local context. | {
"cite_N": [
"@cite_10",
"@cite_4",
"@cite_56"
],
"mid": [
"2126448884",
"2060141076",
"1981638753"
],
"abstract": [
"We introduce the use of describable visual attributes for face verification and image search. Describable visual attributes are labels that can be given to an image to describe its appearance. This paper focuses on images of faces and the attributes used to describe them, although the concepts also apply to other domains. Examples of face attributes include gender, age, jaw shape, nose size, etc. The advantages of an attribute-based representation for vision tasks are manifold: They can be composed to create descriptions at various levels of specificity; they are generalizable, as they can be learned once and then applied to recognize new objects or categories without any further training; and they are efficient, possibly requiring exponentially fewer attributes (and training data) than explicitly naming each category. We show how one can create and label large data sets of real-world images to train classifiers which measure the presence, absence, or degree to which an attribute is expressed in images. These classifiers can then automatically label new images. We demonstrate the current effectiveness-and explore the future potential-of using attributes for face verification and image search via human and computational experiments. Finally, we introduce two new face data sets, named FaceTracer and PubFig, with labeled attributes and identities, respectively.",
"Hair is a feature of the head that frequently changes in different situations. For this reason much research in the area of face perception has employed stimuli without hair. To investigate the effect of the presence of hair we used faces with and without hair in a recognition task. Participants took part in trials in which the state of the hair either remained consistent (Same) or switched between learning and test (Switch). It was found that in the Same trials performance did not differ for stimuli presented with and without hair. This implies that there is sufficient information in the internal features of the face for optimal performance in this task. It was also found that performance in the Switch trials was substantially lower than in the Same trials. This drop in accuracy when the stimuli were switched suggests that faces are represented in a holistic manner and that manipulation of the hair causes disruption to this, with implications for the interpretation of some previous studies.",
""
]
} |
1507.06593 | 2201680753 | We present LDAExplore, a tool to visualize topic distributions in a given document corpus that are generated using Topic Modeling methods. Latent Dirichlet Allocation (LDA) is one of the basic methods that is predominantly used to generate topics. One of the problems with methods like LDA is that users who apply them may not understand the topics that are generated. Also, users may find it difficult to search correlated topics and correlated documents. LDAExplore tries to alleviate these problems by visualizing topic and word distributions generated from the document corpus and allowing the user to interact with them. The system is designed for users who have minimal knowledge of LDA or Topic Modeling methods. To evaluate our design, we run a pilot study which uses the abstracts of 322 Information Visualization papers, where every abstract is considered a document. The topics generated are then explored by users. The results show that users are able to find correlated documents and group them based on topics that are similar. | is another analytics system used to visualize how topics evolve @cite_9 . The system uses a tree cut approach with a combination of a word cloud. The word cloud is a standard method to show a word set with word sizes varying according to their frequency (or probability). Similarly, @cite_7 is a technique designed for analyzing large text corpora using TF-IDF. It generates document clusters by hierarchically clustering these distances and encoding the result as a topic tree. @cite_10 uses a hierarchical structure that shows chapters, sub-chapters, pages, and then lines of text. It again uses TF-IDF as one of its techniques. @cite_1 is a technique designed for analyzing large text corpora using LDA. They present the results in a rectangle where each row represents a document and each column represents a topic. | {
"cite_N": [
"@cite_1",
"@cite_9",
"@cite_10",
"@cite_7"
],
"mid": [
"",
"2074930186",
"2012118336",
"2027855569"
],
"abstract": [
"",
"Using a sequence of topic trees to organize documents is a popular way to represent hierarchical and evolving topics in text corpora. However, following evolving topics in the context of topic trees remains difficult for users. To address this issue, we present an interactive visual text analysis approach to allow users to progressively explore and analyze the complex evolutionary patterns of hierarchical topics. The key idea behind our approach is to exploit a tree cut to approximate each tree and allow users to interactively modify the tree cuts based on their interests. In particular, we propose an incremental evolutionary tree cut algorithm with the goal of balancing 1) the fitness of each tree cut and the smoothness between adjacent tree cuts; 2) the historical and new information related to user interests. A time-based visualization is designed to illustrate the evolving topics over time. To preserve the mental map, we develop a stable layout algorithm. As a result, our approach can quickly guide users to progressively gain profound insights into evolving hierarchical topics. We evaluate the effectiveness of the proposed method on Amazon's Mechanical Turk and real-world news data. The results show that users are able to successfully analyze evolving topics in text data.",
"Interactive visualization provides valuable support for exploring, analyzing, and understanding textual documents. Certain tasks, however, require that insights derived from visual abstractions are verified by a human expert perusing the source text. So far, this problem is typically solved by offering overview-detail techniques, which present different views with different levels of abstractions. This often leads to problems with visual continuity. Focus-context techniques, on the other hand, succeed in accentuating interesting subsections of large text documents but are normally not suited for integrating visual abstractions. With VarifocalReader we present a technique that helps to solve some of these approaches' problems by combining characteristics from both. In particular, our method simplifies working with large and potentially complex text documents by simultaneously offering abstract representations of varying detail, based on the inherent structure of the document, and access to the text itself. In addition, VarifocalReader supports intra-document exploration through advanced navigation concepts and facilitates visual analysis tasks. The approach enables users to apply machine learning techniques and search mechanisms as well as to assess and adapt these techniques. This helps to extract entities, concepts and other artifacts from texts. In combination with the automatic generation of intermediate text levels through topic segmentation for thematic orientation, users can test hypotheses or develop interesting new research questions. To illustrate the advantages of our approach, we provide usage examples from literature studies.",
"For an investigative journalist, a large collection of documents obtained from a Freedom of Information Act request or a leak is both a blessing and a curse: such material may contain multiple newsworthy stories, but it can be difficult and time consuming to find relevant documents. Standard text search is useful, but even if the search target is known it may not be possible to formulate an effective query. In addition, summarization is an important non-search task. We present Overview , an application for the systematic analysis of large document collections based on document clustering, visualization, and tagging. This work contributes to the small set of design studies which evaluate a visualization system “in the wild”, and we report on six case studies where Overview was voluntarily used by self-initiated journalists to produce published stories. We find that the frequently-used language of “exploring” a document collection is both too vague and too narrow to capture how journalists actually used our application. Our iterative process, including multiple rounds of deployment and observations of real world usage, led to a much more specific characterization of tasks. We analyze and justify the visual encoding and interaction techniques used in Overview 's design with respect to our final task abstractions, and propose generalizable lessons for visualization design methodology."
]
} |
1507.06593 | 2201680753 | We present LDAExplore, a tool to visualize topic distributions in a given document corpus that are generated using Topic Modeling methods. Latent Dirichlet Allocation (LDA) is one of the basic methods that is predominantly used to generate topics. One of the problems with methods like LDA is that users who apply them may not understand the topics that are generated. Also, users may find it difficult to search correlated topics and correlated documents. LDAExplore tries to alleviate these problems by visualizing topic and word distributions generated from the document corpus and allowing the user to interact with them. The system is designed for users who have minimal knowledge of LDA or Topic Modeling methods. To evaluate our design, we run a pilot study which uses the abstracts of 322 Information Visualization papers, where every abstract is considered a document. The topics generated are then explored by users. The results show that users are able to find correlated documents and group them based on topics that are similar. | UTOPIAN @cite_15 uses a force-directed graph to represent topics and is a semi-supervised system. It uses non-negative matrix factorization. Other visualizations include @cite_5 and @cite_3 . Another parallel coordinate design, which works with words in the topic directly, is @cite_2 . Both have a design which uses parallel coordinates and a force-directed graph approach. The topic-based, interactive visual analysis tool (TIARA) @cite_17 shows topic distributions across documents over time. Force-directed graphs are advantageous in terms of easy understanding and usability, but they consume a lot of visual screen space and may not scale to large numbers of terms or documents. | {
"cite_N": [
"@cite_3",
"@cite_2",
"@cite_5",
"@cite_15",
"@cite_17"
],
"mid": [
"2024783975",
"2159158288",
"1991676464",
"2087382273",
"2092628313"
],
"abstract": [
"Scalable and effective analysis of large text corpora remains a challenging problem as our ability to collect textual data continues to increase at an exponential rate. To help users make sense of large text corpora, we present a novel visual analytics system, ParallelTopics, which integrates a state-of-the-art probabilistic topic model Latent Dirichlet Allocation (LDA) with interactive visualization. To describe a corpus of documents, ParallelTopics first extracts a set of semantically meaningful topics using LDA. Unlike most traditional clustering techniques in which a document is assigned to a specific cluster, the LDA model accounts for different topical aspects of each individual document. This permits effective full text analysis of larger documents that may contain multiple topics. To highlight this property of the model, ParallelTopics utilizes the parallel coordinate metaphor to present the probabilistic distribution of a document across topics. Such representation allows the users to discover single-topic vs. multi-topic documents and the relative importance of each topic to a document of interest. In addition, since most text corpora are inherently temporal, ParallelTopics also depicts the topic evolution over time. We have applied ParallelTopics to exploring and analyzing several text corpora, including the scientific proposals awarded by the National Science Foundation and the publications in the VAST community over the years. To demonstrate the efficacy of ParallelTopics, we conducted several expert evaluations, the results of which are reported in this paper.",
"We present ThemeDelta, a visual analytics system for extracting and visualizing temporal trends, clustering, and reorganization in time-indexed textual datasets. ThemeDelta is supported by a dynamic temporal segmentation algorithm that integrates with topic modeling algorithms to identify change points where significant shifts in topics occur. This algorithm detects not only the clustering and associations of keywords in a time period, but also their convergence into topics (groups of keywords) that may later diverge into new groups. The visual representation of ThemeDelta uses sinuous, variable-width lines to show this evolution on a timeline, utilizing color for categories, and line width for keyword strength. We demonstrate how interaction with ThemeDelta helps capture the rise and fall of topics by analyzing archives of historical newspapers, of U.S. presidential campaign speeches, and of social messages collected through iNeighbors, a web-based social website. ThemeDelta is evaluated using a qualitative expert user study involving three researchers from rhetoric and history using the historical newspapers corpus.",
"Clustering plays an important role in many large-scale data analyses providing users with an overall understanding of their data. Nonetheless, clustering is not an easy task due to noisy features and outliers existing in the data, and thus the clustering results obtained from automatic algorithms often do not make clear sense. To remedy this problem, automatic clustering should be complemented with interactive visualization strategies. This paper proposes an interactive visual analytics system for document clustering, called iVisClustering, based on a widely-used topic modeling method, latent Dirichlet allocation (LDA). iVisClustering provides a summary of each cluster in terms of its most representative keywords and visualizes soft clustering results in parallel coordinates. The main view of the system provides a 2D plot that visualizes cluster similarities and the relation among data items with a graph-based representation. iVisClustering provides several other views, which contain useful interaction methods. With help of these visualization modules, we can interactively refine the clustering results in various ways. Keywords can be adjusted so that they characterize each cluster better. In addition, our system can filter out noisy data and re-cluster the data accordingly. Cluster hierarchy can be constructed using a tree structure and for this purpose, the system supports cluster-level interactions such as sub-clustering, removing unimportant clusters, merging the clusters that have similar meanings, and moving certain clusters to any other node in the tree structure. Furthermore, the system provides document-level interactions such as moving mis-clustered documents to another cluster and removing useless documents. Finally, we present how interactive clustering is performed via iVisClustering by using real-world document data sets. © 2012 Wiley Periodicals, Inc.",
"Topic modeling has been widely used for analyzing text document collections. Recently, there have been significant advancements in various topic modeling techniques, particularly in the form of probabilistic graphical modeling. State-of-the-art techniques such as Latent Dirichlet Allocation (LDA) have been successfully applied in visual text analytics. However, most of the widely-used methods based on probabilistic modeling have drawbacks in terms of consistency from multiple runs and empirical convergence. Furthermore, due to the complicatedness in the formulation and the algorithm, LDA cannot easily incorporate various types of user feedback. To tackle this problem, we propose a reliable and flexible visual analytics system for topic modeling called UTOPIAN (User-driven Topic modeling based on Interactive Nonnegative Matrix Factorization). Centered around its semi-supervised formulation, UTOPIAN enables users to interact with the topic modeling method and steer the result in a user-driven manner. We demonstrate the capability of UTOPIAN via several usage scenarios with real-world document corpuses such as InfoVis VAST paper data set and product review data sets.",
"We are building a topic-based, interactive visual analytic tool that aids users in analyzing large collections of text. To help users quickly discover content evolution and significant content transitions within a topic over time, here we present a novel, constraint-based approach to temporal topic segmentation. Our solution splits a discovered topic into multiple linear, non-overlapping sub-topics along a timeline by satisfying a diverse set of semantic, temporal, and visualization constraints simultaneously. For each derived sub-topic, our solution also automatically selects a set of representative keywords to summarize the main content of the sub-topic. Our extensive evaluation, including a crowd-sourced user study, demonstrates the effectiveness of our method over an existing baseline."
]
} |
1507.06778 | 2207824908 | There is a growing need for abstractions in logic specification languages such as FO(.) and ASP. One technique to achieve these abstractions are templates (sometimes called macros). While the semantics of templates are virtually always described through a syntactical rewriting scheme, we present an alternative view on templates as second order definitions. To extend the existing definition construct of FO(.) to second order, we introduce a powerful compositional framework for defining logics by modular integration of logic constructs specified as pairs of one syntactical and one semantical inductive rule. We use the framework to build a logic of nested second order definitions suitable to express templates. We show that under suitable restrictions, the view of templates as macros is semantically correct and that adding them does not extend the descriptive complexity of the base logic, which is in line with results of existing approaches. | Abstraction techniques have been an important area of research since the dawn of programming @cite_14 . Popular programming languages such as C++ consider templates as a keystone for abstractions @cite_7 . Within the ASP community, work by @cite_21 and @cite_9 introduced concepts to support composability, called templates and macros respectively. The key idea is to abstract away common constructs through the definition of generic 'template' predicates. These templates can then be resolved using a rewriting algorithm. | {
"cite_N": [
"@cite_9",
"@cite_14",
"@cite_21",
"@cite_7"
],
"mid": [
"1596557244",
"2055900537",
"155155990",
"1589951597"
],
"abstract": [
"Currently, most knowledge representation using logic programming with answer set semantics (AnsProlog) is ‘flat’. In this paper we elaborate on our thoughts about a modular structure for knowledge representation and declarative problem solving formalism using AnsProlog. We present language constructs that allow defining of modules and calling of such modules from programs. This allows one to write large knowledge bases or declarative problem solving programs by reusing existing modules instead of writing everything from scratch. We report on an implementation that allows such constructs. Our ultimate aim is to facilitate the creation and use of a repository of modules that can be used by knowledge engineers without having to re-implement basic knowledge representation concepts from scratch.",
"Modern Programming languages depend on abstraction: they manage complexity by emphazing what is significant to the user and suppressing what is not.",
"The work aims at extending Answer Set Programming (ASP) with the possibility of quickly introducing new predefined constructs and to deal with compound data structures: we show how ASP can be extended with ‘template’ predicate’s definitions. We present language syntax and give its operational semantics. We show that the theory supporting our ASP extension is sound, and that program encodings are evaluated as efficiently as ASP programs. Examples show how the extended language increases declarativity, readability, compactness of program encodings and code reusability..",
"\"The second edition is clearer and adds more examples on how to use STL in a practical environment. Moreover, it is more concerned with performance and tools for its measurement. Both changes are very welcome.\"--Lawrence Rauchwerger, Texas A&M University \"So many algorithms, so little time! The generic algorithms chapter with so many more examples than in the previous edition is delightful! The examples work cumulatively to give a sense of comfortable competence with the algorithms, containers, and iterators used.\"--Max A. Lebow, Software Engineer, Unisys Corporation The STL Tutorial and Reference Guide is highly acclaimed as the most accessible, comprehensive, and practical introduction to the Standard Template Library (STL). Encompassing a set of C++ generic data structures and algorithms, STL provides reusable, interchangeable components adaptable to many different uses without sacrificing efficiency. Written by authors who have been instrumental in the creation and practical application of STL, STL Tutorial and Reference Guide, Second Edition includes a tutorial, a thorough description of each element of the library, numerous sample applications, and a comprehensive reference. You will find in-depth explanations of iterators, generic algorithms, containers, function objects, and much more. Several larger, non-trivial applications demonstrate how to put STL's power and flexibility to work. This book will also show you how to integrate STL with object-oriented programming techniques. In addition, the comprehensive and detailed STL reference guide will be a constant and convenient companion as you learn to work with the library. This second edition is fully updated to reflect all of the changes made to STL for the final ANSI ISO C++ language standard. It has been expanded with new chapters and appendices. 
Many new code examples throughout the book illustrate individual concepts and techniques, while larger sample programs demonstrate the use of the STL in real-world C++ software development. An accompanying Web site, including source code and examples referenced in the text, can be found at http://www.cs.rpi.edu/musser/stl-book/index.html."
]
} |
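The rewriting view of templates summarized in this row (a generic "template" predicate that is macro-expanded into ordinary rules) can be illustrated with a minimal, hypothetical expander. The template syntax, the predicate names, and the `expand` helper below are all invented for illustration and do not reproduce the concrete syntax of any ASP system.

```python
# Hypothetical illustration of templates-as-macros: a generic "transitive
# closure" template is resolved by rewriting, yielding ordinary ASP-style
# rules. The rule syntax and names are assumptions for this sketch only.

TEMPLATE = [
    "{tc}(X,Y) :- {edge}(X,Y).",
    "{tc}(X,Y) :- {edge}(X,Z), {tc}(Z,Y).",
]

def expand(tc_name: str, edge_name: str) -> list[str]:
    """Instantiate the template by substituting concrete predicate names."""
    return [rule.format(tc=tc_name, edge=edge_name) for rule in TEMPLATE]

# Two instantiations reuse the same generic definition:
reach = expand("reachable", "road")
anc = expand("ancestor", "parent")
```

Calling `expand` twice with different predicate names shows the reuse that templates buy: the generic transitive-closure definition is written once and instantiated per relation.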
1507.06778 | 2207824908 | There is a growing need for abstractions in logic specification languages such as FO(.) and ASP. One technique to achieve these abstractions are templates (sometimes called macros). While the semantics of templates are virtually always described through a syntactical rewriting scheme, we present an alternative view on templates as second order definitions. To extend the existing definition construct of FO(.) to second order, we introduce a powerful compositional framework for defining logics by modular integration of logic constructs specified as pairs of one syntactical and one semantical inductive rule. We use the framework to build a logic of nested second order definitions suitable to express templates. We show that under suitable restrictions, the view of templates as macros is semantically correct and that adding them does not extend the descriptive complexity of the base logic, which is in line with results of existing approaches. | More formal attempts at introducing abstractions in ASP have also been made. Dao-Tran et al. introduced modules @cite_19, which can be used in ways similar to templates, but their module system has the disadvantage of introducing additional computational complexity, so the user has to be very careful when trying to write an efficient specification. | {
"cite_N": [
"@cite_19"
],
"mid": [
"1511906506"
],
"abstract": [
"Recently, enabling modularity aspects in Answer Set Programming (ASP) has gained increasing interest to ease the composition of program parts to an overall program. In this paper, we focus on modular nonmonotonic logic programs (MLP) under the answer set semantics, whose modules may have contextually dependent input provided by other modules. Moreover, (mutually) recursive module calls are allowed. We define a model-theoretic semantics for this extended setting, show that many desired properties of ordinary logic programming generalize to our modular ASP, and determine the computational complexity of the new formalism. We investigate the relationship of modular programs to disjunctive logic programs with well-defined input output interface (DLP-functions) and show that they can be embedded into MLPs."
]
} |
1507.06778 | 2207824908 | There is a growing need for abstractions in logic specification languages such as FO(.) and ASP. One technique to achieve these abstractions are templates (sometimes called macros). While the semantics of templates are virtually always described through a syntactical rewriting scheme, we present an alternative view on templates as second order definitions. To extend the existing definition construct of FO(.) to second order, we introduce a powerful compositional framework for defining logics by modular integration of logic constructs specified as pairs of one syntactical and one semantical inductive rule. We use the framework to build a logic of nested second order definitions suitable to express templates. We show that under suitable restrictions, the view of templates as macros is semantically correct and that adding them does not extend the descriptive complexity of the base logic, which is in line with results of existing approaches. | Previously, meta-programming @cite_10 has also been used to introduce abstractions, for example in systems such as HiLog @cite_1. One of HiLog's most notable features is that it combines a higher-order syntax with a first-order semantics. HiLog's main motivation for this is to introduce a useful degree of second order while remaining decidable. While decidability is undeniably an interesting property, the problem of decidability already arises in logic programs under well-founded or stable semantics, certainly with the inclusion of inductive definitions: the issue of undecidability is not inherent to the addition of template behavior. As a result, in recent times deductive inference has been replaced by various other, more practical inference methods such as model checking, model expansion, or querying. Furthermore, for practical applications, we impose the restriction of stratified templates, for which an equivalent first-order semantics exists. | {
"cite_N": [
"@cite_1",
"@cite_10"
],
"mid": [
"1979966822",
"1522278016"
],
"abstract": [
"Abstract We describe a novel logic, called HiLog, and show that it provides a more suitable basis for logic programming than does traditional predicate logic. HiLog has a higher-order syntax and allows arbitrary terms to appear in places where predicates, functions, and atomic formulas occur in predicate calculus. But its semantics is first-order and admits a sound and complete proof procedure. Applications of HiLog are discussed, including DCG grammars, higher-order and modular logic programming, and deductive databases.",
"Meta-programs, which treat other computer programs as data, include compilers, editors, simulators, debuggers, and program transformers. Because of the wide ranging applications, meta-programming has become a subject of considerable practical and theoretical interest. This book provides the first comprehensive view of topics in the theory and application of meta-programming, covering problems of representation and of soundness and correctness of interpreters, analysis and evaluation of meta-logic programs, and applications to sophisticated knowledge-based systems.Harvey Abramson is Reader in Computer Science at the University of Bristol, England; M. H. Rogers is Professor of Computer Science, also at the University of Bristol. Meta-Programming in Logic Programming is in the series Logic Programming Research Reports and Notes, edited by Ehud Shapiro."
]
} |
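The HiLog feature mentioned in this row, a higher-order syntax over a first-order semantics, is classically obtained by encoding atoms whose predicate position may itself be a variable or term, P(t1, ..., tn), as first-order atoms holds(P, t1, ..., tn), so that predicates become ordinary terms. The sketch below is an illustrative rendering of that encoding idea, not HiLog's actual implementation; the `encode` helper and string representation are assumptions.

```python
# Sketch of the standard first-order encoding behind a HiLog-style syntax:
# an atom P(t1, ..., tn), where P may be a variable, is rendered as the
# first-order atom holds(P, t1, ..., tn). Names here are illustrative.

def encode(atom: tuple) -> str:
    """Translate ('P', 't1', ..., 'tn') into 'holds(P, t1, ..., tn)'."""
    pred, *args = atom
    return f"holds({', '.join([pred, *args])})"

# A second-order-looking query "which relations R link a to b?" becomes
# a plain first-order atom with the relation in argument position:
encoded = encode(("R", "a", "b"))  # holds(R, a, b)
```

The encoding keeps the semantics first-order, which is how HiLog admits a sound and complete proof procedure despite its higher-order look.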
1507.06778 | 2207824908 | There is a growing need for abstractions in logic specification languages such as FO(.) and ASP. One technique to achieve these abstractions are templates (sometimes called macros). While the semantics of templates are virtually always described through a syntactical rewriting scheme, we present an alternative view on templates as second order definitions. To extend the existing definition construct of FO(.) to second order, we introduce a powerful compositional framework for defining logics by modular integration of logic constructs specified as pairs of one syntactical and one semantical inductive rule. We use the framework to build a logic of nested second order definitions suitable to express templates. We show that under suitable restrictions, the view of templates as macros is semantically correct and that adding them does not extend the descriptive complexity of the base logic, which is in line with results of existing approaches. | An alternative approach is to see a template instance as a call to another theory, using another solver as an oracle. An implementation of this approach exists in HEX @cite_20. This implementation, however, suffers from the fact that the different calls occur in separate processes; as a consequence, not enough information is shared between them, which hurts the search. This is analogous to @cite_12, where a general framework for modules is presented. A template would be an instance of a module in this framework; however, the associated algebra lacks the possibility to quantify over modules. | {
"cite_N": [
"@cite_12",
"@cite_20"
],
"mid": [
"186029108",
"107062135"
],
"abstract": [
"Motivated by the need to combine systems and logics, we develop a modular approach to the model expansion (MX) problem, a task which is common in applications such as planning, scheduling, computational biology, formal verification. We develop a modular framework where parts of a modular system can be written in different languages. We start our development from a previous work, [14], but modify and extend that framework significantly. In particular, we use a model-theoretic setting and introduce a feedback (loop) operator on modules. We study the expressive power of our framework and demonstrate that adding the feedback operator increases the expressive power considerably. We prove that, even with individual modules being polytime solvable, the framework is expressive enough to capture all of NP, a property which does not hold without loop. Moreover, we demonstrate that, using monotonicity and anti-monotonicity of modules, one can significantly reduce the search space of a solution to a modular system.",
"Answer-Set Programming (ASP) is an established declarative programming paradigm. However, classical ASP lacks subprogram calls as in procedural programming, and access to external computations (akin to remote procedure calls) in general. This feature is desired for increasing modularity and—assuming proper access in place—(meta-)reasoning over subprogram results. While hex-programs extend classical ASP with external source access, they do not support calls of (sub-)programs upfront. We present nested hex -programs, which extend hex-programs to serve the desired feature in a user-friendly manner. Notably, the answer sets of called sub-programs can be individually accessed. This is particularly useful for applications that need to reason over answer sets like belief set merging, user-defined aggregate functions, or preferences of answer sets. We will further present a novel method for rapid prototyping of external sources by the use of nested programs."
]
} |
1507.06778 | 2207824908 | There is a growing need for abstractions in logic specification languages such as FO(.) and ASP. One technique to achieve these abstractions are templates (sometimes called macros). While the semantics of templates are virtually always described through a syntactical rewriting scheme, we present an alternative view on templates as second order definitions. To extend the existing definition construct of FO(.) to second order, we introduce a powerful compositional framework for defining logics by modular integration of logic constructs specified as pairs of one syntactical and one semantical inductive rule. We use the framework to build a logic of nested second order definitions suitable to express templates. We show that under suitable restrictions, the view of templates as macros is semantically correct and that adding them does not extend the descriptive complexity of the base logic, which is in line with results of existing approaches. | Previous efforts were made to generalize common language concepts, such as the work by Lifschitz @cite_6, who extended logic programs to allow arbitrary nesting of conjunction @math, disjunction @math, and negation as failure in rule bodies. The nesting in this paper is of a very different kind, allowing the full logic, including definitions themselves, in rule bodies. | {
"cite_N": [
"@cite_6"
],
"mid": [
"1511336005"
],
"abstract": [
"In “answer set programming”[5,7] solutions to a problem are represented by answer sets (known also as stable models), and not by answer substitutions produced in response to a query, as in conventional logic programming. Instead of Prolog, answer set programming uses software systems capable of computing answer sets. Four such systems were demonstrated at the Workshop on Logic- Based AI held in June of 1999 in Washington, DC: dlv1, smodels,2, DeReS3 and ccalc4."
]
} |
1507.05946 | 1794839224 | We present Buzz, a novel programming language for heterogeneous robot swarms. Buzz advocates a compositional approach, offering primitives to define swarm behaviors both from the perspective of the single robot and of the overall swarm. Single-robot primitives include robot-specific instructions and manipulation of neighborhood data. Swarm-based primitives allow for the dynamic management of robot teams, and for sharing information globally across the swarm. Self-organization stems from the completely decentralized mechanisms upon which the Buzz run-time platform is based. The language can be extended to add new primitives (thus supporting heterogeneous robot swarms), and its run-time platform is designed to be laid on top of other frameworks, such as Robot Operating System. We showcase the capabilities of Buzz by providing code examples, and analyze scalability and robustness of the run-time platform through realistic simulated experiments with representative swarm algorithms. | The last decade saw the introduction of the first top-down approaches to the development of distributed computing systems. Various abstractions and programming languages have been proposed in the sensor network community @cite_17 . A programming methodology inspired by embryogenesis and designed for self-assembly applications was proposed in @cite_16 . Dantu proposed Karma @cite_13 , a framework that combines centralized and distributed elements to perform task allocation in a swarm of aerial robots unable to communicate directly. | {
"cite_N": [
"@cite_16",
"@cite_13",
"@cite_17"
],
"mid": [
"",
"2143705673",
"2117176859"
],
"abstract": [
"",
"Research in micro-aerial vehicle (MAV) construction, control, and high-density power sources is enabling swarms of MAVs as a new class of mobile sensing systems. For efficient operation, such systems must adapt to dynamic environments, cope with uncertainty in sensing and control, and operate with limited resources. We propose a novel system architecture based on a hive-drone model that simplifies the functionality of an individual MAV to a sequence of sensing and actuation commands with no in-field communication. This decision simplifies the hardware and software complexity of individual MAVs and moves the complexity of coordination entirely to a central hive computer. We present Karma, a system for programming and managing MAV swarms. Through simulation and testbed experiments we demonstrate how applications in Karma can run on limited resources, are robust to individual MAV failure, and adapt to changes in the environment.",
"Wireless sensor networks (WSNs) are attracting great interest in a number of application domains concerned with monitoring and control of physical phenomena, as they enable dense and untethered deployments at low cost and with unprecedented flexibility. However, application development is still one of the main hurdles to a wide adoption of WSN technology. In current real-world WSN deployments, programming is typically carried out very close to the operating system, therefore requiring the programmer to focus on low-level system issues. This not only distracts the programmer from the application logic, but also requires a technical background rarely found among application domain experts. The need for appropriate high-level programming abstractions, capable of simplifying the programming chore without sacrificing efficiency, has long been recognized, and several solutions have hitherto been proposed, which differ along many dimensions. In this article, we survey the state of the art in programming approaches for WSNs. We begin by presenting a taxonomy of WSN applications, to identify the fundamental requirements programming platforms must deal with. Then, we introduce a taxonomy of WSN programming approaches that captures the fundamental differences among existing solutions, and constitutes the core contribution of this article. Our presentation style relies on concrete examples and code snippets taken from programming platforms representative of the taxonomy dimensions being discussed. We use the taxonomy to provide an exhaustive classification of existing approaches. Moreover, we also map existing approaches back to the application requirements, therefore providing not only a complete view of the state of the art, but also useful insights for selecting the programming abstraction most appropriate to the application at hand."
]
} |
1507.05946 | 1794839224 | We present Buzz, a novel programming language for heterogeneous robot swarms. Buzz advocates a compositional approach, offering primitives to define swarm behaviors both from the perspective of the single robot and of the overall swarm. Single-robot primitives include robot-specific instructions and manipulation of neighborhood data. Swarm-based primitives allow for the dynamic management of robot teams, and for sharing information globally across the swarm. Self-organization stems from the completely decentralized mechanisms upon which the Buzz run-time platform is based. The language can be extended to add new primitives (thus supporting heterogeneous robot swarms), and its run-time platform is designed to be laid on top of other frameworks, such as Robot Operating System. We showcase the capabilities of Buzz by providing code examples, and analyze scalability and robustness of the run-time platform through realistic simulated experiments with representative swarm algorithms. | Proto @cite_15 is a language designed for "spatial computers," that is, a collection of connected computing devices scattered in a physical space. The spatial computer is modeled as a continuous medium in which each point is assigned a tuple of values. The primitive operations of Proto act on this medium. The LISP-like syntax of Proto is modular by design and produces predictable programs. Proto shines in scenarios in which homogeneous devices perform distributed spatial computation; the inspiration for Buzz's neighbors construct was taken from Proto. However, as a language for robotics, Proto presents a number of limitations; some of these issues have been addressed in @cite_27. | {
"cite_N": [
"@cite_27",
"@cite_15"
],
"mid": [
"1561324106",
"2098914525"
],
"abstract": [
"We present a vision of distributed system coordination as a set of activities affecting the space-time fabric of interaction events. In the tuple space setting that we consider, coordination amounts to control of the spatial and temporal configuration of tuples spread across the network, which in turn drives the behaviour of situated agents. We therefore draw on prior work in spatial computing and distributed systems coordination, to define a new coordination language that adds to the basic Linda primitives a small set of space-time constructs for linking coordination processes with their environment. We show how this framework supports the global-level emergence of adaptive coordination policies, applying it to two example cases: crowd steering in a pervasive computing scenario and a gradient-based implementation of Linda primitives for mobile ad-hoc networks.",
"Programmability is an increasingly important barrier to the deployment of multi-robot systems, as no prior approach allows routine composition and reuse of general aggregate behaviors. The Proto spatial computing language, however, already provides this sort of aggregate behavior programming for non-mobile systems using an abstraction of the network as a continuous-space-filling device. We extend this abstraction to mobile systems and show that Proto can be applied to multi-robot systems with an actuator that turns a vector field into device motion. Proto programs operate on fields of values over an abstract device called the amorphous medium and can be joined together using functional composition. These programs are then automatically transformed for execution by individual devices, producing an approximation of the specified continuous-space behavior. We are thus able to build up a library of simple swarm behaviors, and to compose them together into highly succinct programs that predictably produce the desired complex swarm behaviors, as demonstrated in simulation and on a group of 40 iRobot SwarmBots."
]
} |
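A flavor of the neighborhood primitives mentioned here (Proto's operations on fields of values, and the neighbors construct they inspired in Buzz) can be given with a hedged sketch: each device reduces over the values exposed by devices in communication range. The discrete, synchronous model and all names below are simplifications for illustration; Proto itself is LISP-like and models space as a continuous medium.

```python
# Minimal sketch of a neighborhood reduction in the spirit of Proto's
# field operations: every device combines its neighbors' values (here by
# averaging). The explicit adjacency map stands in for radio range.

def neighbor_average(values, adjacency):
    """values: {device_id: float}; adjacency: {device_id: set of neighbor ids}.
    Returns each device's average over its neighbors' values."""
    out = {}
    for dev, nbrs in adjacency.items():
        if nbrs:
            out[dev] = sum(values[n] for n in nbrs) / len(nbrs)
        else:
            out[dev] = values[dev]  # an isolated device keeps its own value
    return out

vals = {1: 0.0, 2: 6.0, 3: 12.0}
adj = {1: {2}, 2: {1, 3}, 3: {2}}
# One round of gossip-style smoothing over a 3-device line topology:
# neighbor_average(vals, adj) == {1: 6.0, 2: 6.0, 3: 6.0}
```

Iterating such a reduction is the basic self-organizing pattern (gradients, consensus) that both Proto and Buzz express as a one-line neighborhood operation.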
1507.05946 | 1794839224 | We present Buzz, a novel programming language for heterogeneous robot swarms. Buzz advocates a compositional approach, offering primitives to define swarm behaviors both from the perspective of the single robot and of the overall swarm. Single-robot primitives include robot-specific instructions and manipulation of neighborhood data. Swarm-based primitives allow for the dynamic management of robot teams, and for sharing information globally across the swarm. Self-organization stems from the completely decentralized mechanisms upon which the Buzz run-time platform is based. The language can be extended to add new primitives (thus supporting heterogeneous robot swarms), and its run-time platform is designed to be laid on top of other frameworks, such as Robot Operating System. We showcase the capabilities of Buzz by providing code examples, and analyze scalability and robustness of the run-time platform through realistic simulated experiments with representative swarm algorithms. | Meld @cite_10 is a declarative language that realizes the top-down approach by allowing the developer to specify a high-level, logical description of what the swarm as a whole should achieve. The low-level (communication/coordination) mechanisms that reify the high-level goals, i.e., the how, are left to the language implementation and are transparent to the developer. The main concepts of the language are facts and rules. A fact encodes a piece of information that the system considers true at a given time. A computation in Meld consists of applying the specified rules progressively to produce all the true facts, until no further production is possible. Meld supports heterogeneous robot swarms by endowing each robot with facts that map to specific capabilities. A similar concept exists in Buzz, with robot-specific symbols (see sec:implcompilation).
The main limitation of Meld for swarm robotics is that its rule-based mechanics produce programs whose execution is difficult to predict and debug, and thus make it impossible to decompose complex programs into well-defined modules. | {
"cite_N": [
"@cite_10"
],
"mid": [
"1898441578"
],
"abstract": [
"We address how to write programs for distributed computing systems in which the network topology can change dynamically. Examples of such systems, which we call ensembles , include programmable sensor networks (where the network topology can change due to failures in the nodes or links) and modular robotics systems (whose physical configuration can be rearranged under program control). We extend Meld [1], a logic programming language that allows an ensemble to be viewed as a single computing system. In addition to proving some key properties of the language, we have also implemented a complete compiler for Meld. It generates code for TinyOS [14] and for a Claytronics simulator [12]. We have successfully written correct, efficient, and complex programs for ensembles containing over one million nodes."
]
} |
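The Meld evaluation model described in this row, applying rules progressively until no further facts can be produced, is naive forward chaining to a fixpoint. A minimal sketch under assumed representations (facts as tuples, rules as Python functions; both hypothetical, not Meld's actual machinery):

```python
# Sketch of Meld-style evaluation: rules are applied repeatedly to derive
# new facts until a fixpoint is reached, i.e., no rule produces anything
# new. The connectivity facts and single rule below are invented examples.

def fixpoint(facts, rules):
    """facts: set of tuples; rules: functions mapping the current fact set
    to a set of derivable facts. Iterate until nothing new appears."""
    facts = set(facts)
    while True:
        new = set()
        for rule in rules:
            new |= rule(facts) - facts
        if not new:  # fixpoint: every derivable fact is already present
            return facts
        facts |= new

# Rule: robot connectivity is transitive over links.
def transitive(facts):
    return {("conn", a, c)
            for (p, a, b) in facts if p == "conn"
            for (q, b2, c) in facts if q == "conn" and b2 == b}

base = {("conn", "r1", "r2"), ("conn", "r2", "r3")}
# fixpoint(base, [transitive]) additionally derives ("conn", "r1", "r3")
```

The fixpoint loop also hints at why Meld executions are hard to predict: which rule fires when, and on which robot, is left entirely to the runtime.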
1507.05946 | 1794839224 | We present Buzz, a novel programming language for heterogeneous robot swarms. Buzz advocates a compositional approach, offering primitives to define swarm behaviors both from the perspective of the single robot and of the overall swarm. Single-robot primitives include robot-specific instructions and manipulation of neighborhood data. Swarm-based primitives allow for the dynamic management of robot teams, and for sharing information globally across the swarm. Self-organization stems from the completely decentralized mechanisms upon which the Buzz run-time platform is based. The language can be extended to add new primitives (thus supporting heterogeneous robot swarms), and its run-time platform is designed to be laid on top of other frameworks, such as Robot Operating System. We showcase the capabilities of Buzz by providing code examples, and analyze scalability and robustness of the run-time platform through realistic simulated experiments with representative swarm algorithms. | Voltron @cite_28 is a language designed for distributed mobile sensing. Voltron allows the developer to specify the logic to be executed at several locations, without having to dictate how the robots must coordinate to achieve the objectives. Coordination is achieved automatically through the use of a shared tuple space, for which two implementations were tested---a centralized one, and a decentralized one based on the concept of virtual synchrony @cite_33 . In Buzz, virtual stigmergy was loosely inspired by the capabilities of virtual synchrony, although the internals of the two systems differ substantially. Voltron excels in single-robot, single-task scenarios in which pure sensing is involved; however, fine-grained coordination of heterogeneous swarms is not possible, because Voltron's abstraction hides the low-level details of the robots. | {
"cite_N": [
"@cite_28",
"@cite_33"
],
"mid": [
"2142588308",
"2131929623"
],
"abstract": [
"Autonomous drones are a powerful new breed of mobile sensing platform that can greatly extend the capabilities of traditional sensing systems. Unfortunately, it is still non-trivial to coordinate multiple drones to perform a task collaboratively. We present a novel programming model called team-level programming that can express collaborative sensing tasks without exposing the complexity of managing multiple drones, such as concurrent programming, parallel execution, scaling, and failure recovering. We create the Voltron programming system to explore the concept of team-level programming in active sensing applications. Voltron offers programming constructs to create the illusion of a simple sequential execution model while still maximizing opportunities to dynamically re-task the drones as needed. We implement Voltron by targeting a popular aerial drone platform, and evaluate the resulting system using a combination of real deployments, user studies, and emulation. Our results indicate that Voltron enables simpler code and produces marginal overhead in terms of CPU, memory, and network utilization. In addition, it greatly facilitates implementing correct and complete collaborative drone applications, compared to existing drone programming systems.",
"We describe applications of a virtually synchronous environment for distributed programming, which underlies a collection of distributed programming tools in the ISIS 2 system. A virtually synchronous environment allows processes to be structured into process groups , and makes events like broadcasts to the group as an entity, group membership changes, and even migration of an activity from one place to another appear to occur instantaneously — in other words, synchronously. A major advantage to this approach is that many aspects of a distributed application can be treated independently without compromising correctness. Moreover, user code that is designed as if the system were synchronous can often be executed concurrently. We argue that this approach to building distributed and fault-tolerant software is more straightforward, more flexible, and more likely to yield correct solutions than alternative approaches."
]
} |
1507.06188 | 2284550951 | Wireless sensor networks (WSNs) operating in the license-free spectrum suffer from uncontrolled interference as those spectrum bands become increasingly crowded. The emerging cognitive radio sensor networks (CRSNs) provide a promising solution to address this challenge by enabling sensor nodes to opportunistically access licensed channels. However, since sensor nodes have to consume considerable energy to support CR functionalities, such as channel sensing and switching, the opportunistic channel accessing should be carefully devised for improving the energy efficiency in CRSN. To this end, we investigate the dynamic channel accessing problem to improve the energy efficiency for a clustered CRSN. Under the primary users' protection requirement, we study the resource allocation issues to maximize the energy efficiency of utilizing a licensed channel for intra-cluster and inter-cluster data transmission, respectively. With the consideration of the energy consumption in channel sensing and switching, we further determine the condition when sensor nodes should sense and switch to a licensed channel for improving the energy efficiency, according to the packet loss rate of the license-free channel. In addition, two dynamic channel accessing schemes are proposed to identify the channel sensing and switching sequences for intra-cluster and inter-cluster data transmission, respectively. Extensive simulation results demonstrate that the proposed channel accessing schemes can significantly reduce the energy consumption in CRSNs. | @cite_1 analyze the delay performance to support real-time traffic in CRSNs. They derive the average packet transmission delay for two types of channel switching mechanisms, namely periodic switching and triggered switching, under two kinds of real-time traffic, namely periodic data traffic and Poisson traffic, respectively.
@cite_29 provide several principles for delay-sensitive multimedia communication in CRSNs through extensive simulations. A greedy networking algorithm is proposed in @cite_15 to improve end-to-end delay and network throughput for CRSNs, by leveraging distributed source coding and broadcasting. Since the QoS performance of sensor networks can be significantly impacted by routing schemes, research efforts are also devoted to developing dynamic routing for CRSNs @cite_24 @cite_17 . Quang and Kim @cite_24 propose a throughput-aware routing algorithm to improve network throughput and decrease end-to-end delay for a large-scale clustered CRSN based on ISA100.11a. In addition, opportunistic medium access control (MAC) protocol design and performance analysis of existing MAC protocols for CRSNs are studied in @cite_25 @cite_23 . | {
"cite_N": [
"@cite_29",
"@cite_1",
"@cite_24",
"@cite_23",
"@cite_15",
"@cite_25",
"@cite_17"
],
"mid": [
"2132418756",
"2165578915",
"2089523914",
"1968242001",
"",
"2102618040",
"2157426660"
],
"abstract": [
"Multimedia and delay-sensitive data applications in cognitive radio sensor networks (CRSN) require efficient real-time communication and dynamic spectrum access (DSA) capabilities. This requirement poses emerging problems to be addressed in inherently resource-constrained sensor networks, and needs investigation of CRSN challenges with real-time communication requirements. In this paper, the main design challenges and principles for multimedia and delay-sensitive data transport in CRSN are introduced. The existing transport protocols and algorithms devised for cognitive radio ad hoc networks and wireless sensor networks (WSN) are explored from the perspective of CRSN paradigm. Specifically, the challenges for real-time transport in CRSN are investigated in different spectrum environments of smart grid, e.g., 500kV substation, main power room and underground network transformer vaults. Open research issues for the realization of energy-efficient and real-time transport in CRSN are also presented. Overall, the performance evaluations provide valuable insights about real-time transport in CRSN and guide design decisions and trade-offs for CRSN applications in smart electric power grid.",
"Traditional wireless sensor networks (WSNs) working in the license-free spectrum suffer from uncontrolled interference as the license-free spectrum becomes increasingly crowded. Designing a WSN based on cognitive radio can be promising in the near future in order to provide data transmissions with quality of service requirements. In this paper we introduce a cognitive radio sensor network (CRSN) and analyze its performance for supporting real-time traffic. The network opportunistically accesses vacant channels in the licensed spectrum. When the current channel becomes unavailable, the devices can switch to another available channel. Two types of channel switching are considered: in periodic switching (PS) the devices can switch to a new channel only at the beginning of each channel switching (CS) interval, while in triggered switching (TS) the devices can switch to a new channel as soon as the current channel is lost. We consider two types of real-time traffic: i) a burst of packets are generated periodically and the number of packets in each burst is random, and ii) packet arrivals follow a Poisson process. We derive the average packet transmission delay for each type of traffic and channel switching mechanism. Our results indicate that real-time traffic can be effectively supported in the CRSN with small average packet transmission delay. For the network using PS, packets with the Poisson arrivals experience longer average delay than the bursty arrivals; while for the network using TS, packets with the bursty arrivals experience longer average delay.",
"This paper proposes a routing algorithm that enhances throughput and decreases end-to-end delay in industrial cognitive radio sensor networks (ICRSNs) based on ISA100.11a. In ICRSNs, the throughput is downgraded by interference from primary networks. The proposed routing algorithm is targeted at large-scale networks where data are forwarded through different clusters on their way to the sink. By estimating the maximum throughput for each path, the data can be forwarded through the most optimal path. Simulation results show that our scheme can enhance throughput and decrease end-to-end delay.",
"Spectrum sensing is an integral part of medium access control (MAC) in cognitive radio (CR) networks as its reliability determines the success of transmission. However, it is an energy-consuming operation that needs to be minimized for CR sensor networks (CRSNs) due to resource scarcity. In this paper, a cognitive adaptive MAC (CAMAC) protocol, which supports opportunistic transmission while addressing the issue of power limitation in CRSNs, is proposed. Energy conservation in CAMAC is achieved on three fronts: on-demand spectrum sensing, limiting the number of spectrum sensing nodes, and applying a duty cycle. Spectrum sensing is initiated on-demand when the nodes have data to transmit, and it also exploits a subset of spectrum sensing nodes to gather spectrum availability information for all the nodes. Furthermore, it defines an adaptive duty cycle for the CRSN nodes to periodically sleep and remain awake when data are available for transmission. Hence, CAMAC stands as an adaptive solution that employs a small number of spectrum sensing nodes with an adaptive sensing period, yielding minimum energy consumption. Simulation results reveal the efficiency of CAMAC in terms of high throughput and less energy consumption, which is adaptive to primary users' traffic and duty cycle.",
"",
"Given the highly variable physical layer characteristics in cognitive radio sensor networks (CRSN), it is indispensable to provide the performance analysis for cognitive radio users for smooth operations of the higher layer protocols. Taking into account the dynamic spectrum access, this paper formulates the two fundamental performance metrics in CRSN; bandwidth and delay. The performance is analyzed for a CSMA-based medium access control protocol that uses a common control channel for secondary users (SUs) to negotiate the wideband data traffic channel. The two performance metrics are derived based on the fact that SUs can exploit the cognitive radio to simultaneously access distinct traffic channels in the common interference region. This feature has not been exploited in previous studies in estimating the achievable throughput and delay in cognitive radio networks. Performance analysis reveals that dedicating a common control channel for SUs enhances their aggregated bandwidth approximately five times through the possibility of concurrent transmissions on different traffic channels and reduces the packet delay significantly.",
"Wireless sensor networks (WSNs) have been increasingly considered an attractive solution for a plethora of applications. The low cost of sensor nodes provides a mean to deploy large sensor arrays in a variety of applications, such as civilian and environmental monitoring. Most of the WSNs operate in unlicensed spectrum bands, which have become overcrowded. As the number of the nodes that join the network increases, the need for energy-efficient, resource-constrained, and spectrum-efficient protocol also increases. Incorporating cognitive radio capability in sensor networks yields a promising networking paradigm, also known as cognitive radio sensor networks. In this paper, a cognitive networking with opportunistic routing protocol for WSNs is introduced. The objective of the proposed protocol is to improve the network performance after increasing network scalability. The performance of the proposed protocol is evaluated through simulations. An accurate channel model is built to evaluate the signal strength in different areas of a complex indoor environment. Then, a discrete event simulator is applied to examine the performance of the proposed protocol in comparison with two other routing protocols. Simulation results show that when comparing with other common routing protocols, the proposed protocol performs better with respect to throughput, packet delay, and total energy consumption."
]
} |
1507.06352 | 2285381279 | Performance bounds are given for exploratory co-clustering blockmodeling of bipartite graph data, where we assume the rows and columns of the data matrix are samples from an arbitrary population. This is equivalent to assuming that the data is generated from a nonsmooth graphon. It is shown that co-clusters found by any method can be extended to the row and column populations, or equivalently that the estimated blockmodel approximates a blocked version of the generative graphon, with estimation error bounded by @math . Analogous performance bounds are also given for degree-corrected blockmodels and random dot product graphs, with error rates depending on the dimensionality of the latent variable space. | In @cite_30 and in the present paper, both of which consider only bipartite graphs, the emphasis is on exploratory analysis. Hence no assumptions are placed on the generative graphon. Unlike the works which assume smoothness or low-rank structure, the object of inference is not the generative model itself, but rather a blocked version of it (this is defined precisely in Section ). This is reminiscent of some results for confidence intervals in nonparametric regression, in which the interval is centered not on the generative function or density itself, but rather on a smoothed or histogrammed version [wasserman2006all, Sec. 5.7 and Thm. 6.20]. The present paper can be viewed as a substantial improvement over @cite_30 ; for example, Theorem improves the rates of convergence from @math to @math , and also applies to computationally efficient estimates. | {
"cite_N": [
"@cite_30"
],
"mid": [
"2005455916"
],
"abstract": [
"This article establishes the performance of stochastic blockmodels in addressing the co-clustering problem of partitioning a binary array into subsets, assuming only that the data are generated by a nonparametric process satisfying the condition of separate exchangeability. We provide oracle inequalities with rate of convergence @math corresponding to profile likelihood maximization and mean-square error minimization, and show that the blockmodel can be interpreted in this setting as an optimal piecewise-constant approximation to the generative nonparametric model. We also show for large sample sizes that the detection of co-clusters in such data indicates with high probability the existence of co-clusters of equal size and asymptotically equivalent connectivity in the underlying generative process."
]
} |
1507.06240 | 2308861422 | A distance labeling scheme is an assignment of bit-labels to the vertices of an undirected, unweighted graph such that the distance between any pair of vertices can be decoded solely from their labels. We propose a series of new labeling schemes within the framework of so-called hub labeling (HL, also known as landmark labeling or 2-hop-cover labeling), in which each node @math stores its distance to all nodes from an appropriately chosen set of hubs @math . For a queried pair of nodes @math , the length of a shortest @math -path passing through a hub node from @math is then used as an upper bound on the distance between @math and @math . We present a hub labeling which allows us to decode exact distances in sparse graphs using labels of size sublinear in the number of nodes. For graphs with at most @math nodes and average degree @math , the tradeoff between label bit size @math and query decoding time @math for our approach is given by @math , for any @math . Our simple approach is thus the first sublinear-space distance labeling for sparse graphs that simultaneously admits small decoding time (for constant @math , we can achieve any @math while maintaining @math ), and it also provides an improvement in terms of label size with respect to previous slower approaches. By using similar techniques, we then present a @math -additive labeling scheme for general graphs, i.e., one in which the decoder provides a 2-additive-approximation of the distance between any pair of nodes. We achieve almost the same label size-time tradeoff @math , for any @math . To our knowledge, this is the first additive scheme with constant absolute error to use labels of sublinear size. The corresponding decoding time is then small (any @math is sufficient). | The distance labeling problem in undirected graphs was first investigated by Graham and Pollak @cite_13 , who provided the first labeling scheme with labels of size @math . 
The decoding time for labels of size @math was subsequently improved to @math by Gavoille et al. @cite_4 and to @math by Weimann and Peleg @cite_7 . Finally, Alstrup et al. @cite_1 present a scheme for general graphs with decoding in @math time using labels of size @math bits. For the sake of sanity of the notation, we define @math . This matches, up to lower-order terms, the space of the currently best known distance oracle with @math time and @math total space in a memory model, due to Nitto and Venturini @cite_24 . | {
"cite_N": [
"@cite_4",
"@cite_7",
"@cite_1",
"@cite_24",
"@cite_13"
],
"mid": [
"2034501275",
"2074619214",
"2952655484",
"2118467196",
"140456478"
],
"abstract": [
"We consider the problem of labeling the nodes of a graph in a way that will allow one to compute the distance between any two nodes directly from their labels (without using any additional information). Our main interest is in the minimal length of labels needed in different cases. We obtain upper and lower bounds for several interesting families of graphs. In particular, our main results are the following. For general graphs, we show that the length needed is Θ(n). For trees, we show that the length needed is Θ(log^2 n). For planar graphs, we show an upper bound of O(√n log n) and a lower bound of Ω(n^{1/3}). For bounded degree graphs, we show a lower bound of Ω(√n). The upper bounds for planar graphs and for trees follow by a more general upper bound for graphs with an r(n)-separator. The two lower bounds, however, are obtained by two different arguments that may be interesting in their own right. We also show some lower bounds on the length of the labels, even if it is only required that distances be approximated to a multiplicative factor s. For example, we show that for general graphs the required length is Ω(n) for every s < 3. We also consider the problem of the time complexity of the distance function once the labels are computed. We show that there are graphs with optimal labels of length 3 log n, such that if we use any labels with fewer than n bits per label, computing the distance function requires exponential time. A similar result is obtained for planar and bounded degree graphs.",
"We show that the vertices of an edge-weighted undirected graph can be labeled with labels of size O(n) such that the exact distance between any two vertices can be inferred from their labels alone in O(log* n) time. This improves the previous best exact distance labeling scheme that also requires O(n)-sized labels but O(log log n) time to compute the distance. Our scheme is almost optimal as exact distance labeling is known to require labels of length Ω(n).",
"We consider how to assign labels to any undirected graph with n nodes such that, given the labels of two nodes and no other information regarding the graph, it is possible to determine the distance between the two nodes. The challenge in such a distance labeling scheme is primarily to minimize the maximum label length and secondarily to minimize the time needed to answer distance queries (decoding). Previous schemes have offered different trade-offs between label lengths and query time. This paper presents a simple algorithm with shorter labels and shorter query time than any previous solution, thereby improving the state-of-the-art with respect to both label length and query time in one single algorithm. Our solution addresses several open problems concerning label length and decoding time and is the first improvement of label length for more than three decades. More specifically, we present a distance labeling scheme with label size (log 3)n/2 + o(n) (logarithms are in base 2) and O(1) decoding time. This outperforms all existing results with respect to both size and decoding time, including Winkler's (Combinatorica 1983) decade-old result, which uses labels of size (log 3)n and O(n log n) decoding time, and (SODA'01), which uses labels of size 11n + o(n) and O(loglog n) decoding time. In addition, our algorithm is simpler than the previous ones. In the case of integral edge weights of size at most W, we present almost matching upper and lower bounds for label sizes. For r-additive approximation schemes, where distances can be off by an additive constant r, we give both upper and lower bounds. In particular, we present an upper bound for 1-additive approximation schemes which, in the unweighted case, has the same size (ignoring second order terms) as an adjacency scheme: n/2. We also give results for bipartite graphs and for exact and 1-additive distance oracles.",
"Let G be an unweighted and undirected graph of n nodes, and let D be the n × n matrix storing the All-Pairs-Shortest-Path Distances in G. Since D contains integers in [n] ∪ {+∞}, its plain storage takes n^2 log(n+1) bits. However, a simple counting argument shows that n^2/2 bits are necessary to store D. In this paper we investigate the question of finding a succinct representation of D that requires O(n^2) bits of storage and still supports constant-time access to each of its entries. This is asymptotically optimal in the worst case, and far from the information-theoretic lower bound by a multiplicative factor log_2 3 ≈ 1.585. As a result O(1) bits per pair of nodes in G are enough to retain constant-time access to their shortest-path distance. We achieve this result by reducing the storage of D to the succinct storage of labeled trees and ternary sequences, for which we properly adapt and orchestrate the use of known compressed data structures. This approach can be easily and optimally extended to graphs whose edge weights are positive integers bounded by a constant value.",
"We shall refer to d((s_1, ..., s_n), (s'_1, ..., s'_n)) as the distance between the two n-tuples (s_1, ..., s_n) and (s'_1, ..., s'_n), although, strictly speaking, this is an abuse of terminology since d does not satisfy the triangle inequality. For a connected graph G, the distance between two vertices v and v' in G, denoted by d_G(v,v'), is defined to be the minimum number of edges in any path between v and v'. The following problem arose recently in connection with a data transmission scheme of J. R. Pierce [4]."
]
} |
1507.06240 | 2308861422 | A distance labeling scheme is an assignment of bit-labels to the vertices of an undirected, unweighted graph such that the distance between any pair of vertices can be decoded solely from their labels. We propose a series of new labeling schemes within the framework of so-called hub labeling (HL, also known as landmark labeling or 2-hop-cover labeling), in which each node @math stores its distance to all nodes from an appropriately chosen set of hubs @math . For a queried pair of nodes @math , the length of a shortest @math -path passing through a hub node from @math is then used as an upper bound on the distance between @math and @math . We present a hub labeling which allows us to decode exact distances in sparse graphs using labels of size sublinear in the number of nodes. For graphs with at most @math nodes and average degree @math , the tradeoff between label bit size @math and query decoding time @math for our approach is given by @math , for any @math . Our simple approach is thus the first sublinear-space distance labeling for sparse graphs that simultaneously admits small decoding time (for constant @math , we can achieve any @math while maintaining @math ), and it also provides an improvement in terms of label size with respect to previous slower approaches. By using similar techniques, we then present a @math -additive labeling scheme for general graphs, i.e., one in which the decoder provides a 2-additive-approximation of the distance between any pair of nodes. We achieve almost the same label size-time tradeoff @math , for any @math . To our knowledge, this is the first additive scheme with constant absolute error to use labels of sublinear size. The corresponding decoding time is then small (any @math is sufficient). | The notion of @math -preserving distance labeling, first introduced by Bollobás et al. @cite_2 , describes a labeling scheme correctly encoding every distance that is at least @math .
@cite_2 presents such a @math -preserving scheme of size @math . This was recently improved by Alstrup et al. @cite_8 to a @math -preserving scheme of size @math . Together with the observation that all distances smaller than @math can be stored directly, this results in a labeling scheme of size @math , where @math . For sparse graphs, this is @math . | {
"cite_N": [
"@cite_8",
"@cite_2"
],
"mid": [
"756611012",
"2082352769"
],
"abstract": [
"A distance labeling scheme labels the n nodes of a graph with binary strings such that, given the labels of any two nodes, one can determine the distance in the graph between the two nodes by looking only at the labels. A D-preserving distance labeling scheme only returns precise distances between pairs of nodes that are at distance at least D from each other. In this paper we consider distance labeling schemes for the classical case of unweighted and undirected graphs. We present the first distance labeling scheme of size o(n) for sparse graphs (and hence bounded degree graphs). This addresses an open problem by Gavoille et al. [J. Algo. 2004], hereby separating the complexity from general graphs, which require Θ(n) size, Moon [Proc. of Glasgow Math. Association 1965]. As an intermediate result we give a O(n/D log
"For an unweighted graph @math , @math is a subgraph if @math , and @math is a Steiner graph if @math , and for any pair of vertices @math , the distance between them in @math (denoted @math ) is at least the distance between them in @math (denoted @math ). In this paper we introduce the notion of distance preserver. A subgraph (resp., Steiner graph) @math of a graph @math is a subgraph (resp., Steiner) @math -preserver of @math if for every pair of vertices @math with @math , @math . We show that any graph (resp., digraph) has a subgraph @math -preserver with at most @math edges (resp., arcs), and there are graphs and digraphs for which any undirected Steiner @math -preserver contains @math edges. However, we show that if one allows a directed Steiner (diSteiner) @math -preserver, then these bounds can be improved. Specifically, we show that for any graph or digraph there exists a diSteiner @math -preserver with @math arcs, and that this result is tight up to a constant factor. We also study @math -preserving distance labeling schemes, that are labeling schemes that guarantee precise calculation of distances between pairs of vertices that are at a distance of at least @math one from another. We show that there exists a @math -preserving labeling scheme with labels of size @math , and that labels of size @math are required for any @math -preserving labeling scheme."
]
} |
1507.06240 | 2308861422 | A distance labeling scheme is an assignment of bit-labels to the vertices of an undirected, unweighted graph such that the distance between any pair of vertices can be decoded solely from their labels. We propose a series of new labeling schemes within the framework of so-called hub labeling (HL, also known as landmark labeling or 2-hop-cover labeling), in which each node @math stores its distance to all nodes from an appropriately chosen set of hubs @math . For a queried pair of nodes @math , the length of a shortest @math -path passing through a hub node from @math is then used as an upper bound on the distance between @math and @math . We present a hub labeling which allows us to decode exact distances in sparse graphs using labels of size sublinear in the number of nodes. For graphs with at most @math nodes and average degree @math , the tradeoff between label bit size @math and query decoding time @math for our approach is given by @math , for any @math . Our simple approach is thus the first sublinear-space distance labeling for sparse graphs that simultaneously admits small decoding time (for constant @math , we can achieve any @math while maintaining @math ), and it also provides an improvement in terms of label size with respect to previous slower approaches. By using similar techniques, we then present a @math -additive labeling scheme for general graphs, i.e., one in which the decoder provides a 2-additive-approximation of the distance between any pair of nodes. We achieve almost the same label size-time tradeoff @math , for any @math . To our knowledge, this is the first additive scheme with constant absolute error to use labels of sublinear size. The corresponding decoding time is then small (any @math is sufficient). | For specific classes of graphs, Gavoille et al. @cite_4 described a @math distance labeling for planar graphs, together with an @math lower bound for the same class of graphs.
Additionally, an @math upper bound for trees and an @math lower bound for sparse graphs were given. | {
"cite_N": [
"@cite_4"
],
"mid": [
"2034501275"
],
"abstract": [
"We consider the problem of labeling the nodes of a graph in a way that will allow one to compute the distance between any two nodes directly from their labels (without using any additional information). Our main interest is in the minimal length of labels needed in different cases. We obtain upper and lower bounds for several interesting families of graphs. In particular, our main results are the following. For general graphs, we show that the length needed is Θ(n). For trees, we show that the length needed is Θ(log2 n). For planar graphs, we show an upper bound of O(√nlogn) and a lower bound of Ω(n1 3). For bounded degree graphs, we show a lower bound of Ω(√n). The upper bounds for planar graphs and for trees follow by a more general upper bound for graphs with a r(n)-separator. The two lower bounds, however, are obtained by two different arguments that may be interesting in their own right. We also show some lower bounds on the length of the labels, even if it is only required that distances be approximated to a multiplicative factor s. For example, we show that for general graphs the required length is Ω(n) for every s < 3. We also consider the problem of the time complexity of the distance function once the labels are computed. We show that there are graphs with optimal labels of length 3 log n, such that if we use any labels with fewer than n bits per label, computing the distance function requires exponential time. A similar result is obtained for planar and bounded degree graphs."
]
} |
1507.05533 | 1718159072 | We study security in partial repair in wireless caching networks where parts of the stored packets in the caching nodes are susceptible to be erased. Let us denote a caching node that has lost parts of its stored packets as a sick caching node and a caching node that has not lost any packet as a healthy caching node. In partial repair, a set of caching nodes (among sick and healthy caching nodes) broadcast information to other sick caching nodes to recover the erased packets. The broadcast information from a caching node is assumed to be received without any error by all other caching nodes. All the sick caching nodes then are able to recover their erased packets, while using the broadcast information and the nonerased packets in their storage as side information. In this setting, if an eavesdropper overhears the broadcast channels, it might obtain some information about the stored file. We thus study secure partial repair in the senses of information-theoretically strong and weak security. In both senses, we investigate the secrecy caching capacity, namely, the maximum amount of information which can be stored in the caching network such that there is no leakage of information during a partial repair process. We then deduce the strong and weak secrecy caching capacities, and also derive the sufficient finite field sizes for achieving the capacities. Finally, we propose optimal secure codes for exact partial repair, in which the recovered packets are exactly the same as erased packets. | Works more closely related to our study are @cite_27 @cite_26 @cite_11 @cite_12 , where security in the repair problem of distributed storage systems has been studied. In @cite_27 , the repair problem in the presence of a passive eavesdropper (who can only intercept the data in a network) and in the presence of an active eavesdropper (who can change the data in a network) has been studied, and strong secure codes have been suggested.
In @cite_12 , secure regenerating codes using product-matrix codes are suggested. Weakly secure regenerating codes have recently been studied in @cite_11 . Previous studies of the repair problem in distributed storage systems assume that one or several nodes fail and all their stored packets are lost @cite_2 @cite_17 @cite_15 @cite_10 @cite_24 . The same assumption is made in previous studies of security in the repair problem @cite_27 @cite_26 @cite_11 @cite_12 . In a recent work, the authors of @cite_14 studied (partial) repair when parts of the packets in a storage node are lost. | {
"cite_N": [
"@cite_26",
"@cite_14",
"@cite_17",
"@cite_24",
"@cite_27",
"@cite_2",
"@cite_15",
"@cite_10",
"@cite_12",
"@cite_11"
],
"mid": [
"2071346076",
"2021309919",
"2143174824",
"2111915261",
"2118925326",
"2105185344",
"1979205928",
"2141461253",
"2964275527",
"2023664551"
],
"abstract": [
"We focus on the problem of secure distributed storage over multiple untrusted clouds or networks. Our main contribution is a low complexity scheme that relies on erasure coding techniques for achieving prescribed levels of confidentiality and reliability. Using matrices that have no singular square submatrices, we subject the original data to a linear transformation. The resulting coded symbols are then stored in different networks. This scheme allows users with access to a threshold number of networks to reconstruct perfectly the original data, while ensuring that eavesdroppers with access to any number of networks smaller than this threshold are unable to decode any of the original symbols. This holds even if the attackers are able to guess some of the missing symbols. We further quantify the achievable level of security, and analyze the complexity of the proposed scheme.",
"We investigate the repair problem for wireless caching networks when parts of stored packets in caching nodes are lost. We first develop theoretical lower bounds on the number of necessary transmission packets over error-free broadcast channels for repair. Then we discuss the impact of the distribution of the lost packets among caching nodes. Finally, we study the construction of repair codes and propose the optimal exact repair for some special scenarios.",
"Erasure coding techniques are used to increase the reliability of distributed storage systems while minimizing storage overhead. Also of interest is minimization of the bandwidth required to repair the system following a node failure. In a recent paper, characterize the tradeoff between the repair bandwidth and the amount of data stored per node. They also prove the existence of regenerating codes that achieve this tradeoff. In this paper, we introduce Exact Regenerating Codes, which are regenerating codes possessing the additional property of being able to duplicate the data stored at a failed node. Such codes require low processing and communication overheads, making the system practical and easy to maintain. Explicit construction of exact regenerating codes is provided for the minimum bandwidth point on the storage-repair bandwidth tradeoff, relevant to distributed-mail-server applications. A subspace based approach is provided and shown to yield necessary and sufficient conditions on a linear code to possess the exact regeneration property as well as prove the uniqueness of our construction. Also included in the paper, is an explicit construction of regenerating codes for the minimum storage point for parameters relevant to storage in peer-to-peer systems. This construction supports a variable number of nodes and can handle multiple, simultaneous node failures. All constructions given in the paper are of low complexity, requiring low field size in particular.",
"When there are multiple storage node failures in distributed storage system, regenerating them individually is suboptimal as far as repair bandwidth minimization is concerned. The tradeoff between storage and repair bandwidth is derived in the case where data exchange among the newcomers is enabled. The tradeoff curve with cooperation is strictly better than the one without cooperation. An explicit construction of cooperative regenerating code is given.",
"We address the problem of securing distributed storage systems against eavesdropping and adversarial attacks. An important aspect of these systems is node failures over time, necessitating, thus, a repair mechanism in order to maintain a desired high system reliability. In such dynamic settings, an important security problem is to safeguard the system from an intruder who may come at different time instances during the lifetime of the storage system to observe and possibly alter the data stored on some nodes. In this scenario, we give upper bounds on the maximum amount of information that can be stored safely on the system. For an important operating regime of the distributed storage system, which we call the bandwidth-limited regime, we show that our upper bounds are tight and provide explicit code constructions. Moreover, we provide a way to short list the malicious nodes and expurgate the system.",
"Distributed storage systems provide reliable access to data through redundancy spread over individually unreliable nodes. Application scenarios include data centers, peer-to-peer storage systems, and storage in wireless networks. Storing data using an erasure code, in fragments spread across nodes, requires less redundancy than simple replication for the same level of reliability. However, since fragments must be periodically replaced as nodes fail, a key question is how to generate encoded fragments in a distributed way while transferring as little data as possible across the network. For an erasure coded system, a common practice to repair from a single node failure is for a new node to reconstruct the whole encoded data object to generate just one encoded block. We show that this procedure is sub-optimal. We introduce the notion of regenerating codes, which allow a new node to communicate functions of the stored data from the surviving nodes. We show that regenerating codes can significantly reduce the repair bandwidth. Further, we show that there is a fundamental tradeoff between storage and repair bandwidth which we theoretically characterize using flow arguments on an appropriately constructed graph. By invoking constructive results in network coding, we introduce regenerating codes that can achieve any point in this optimal tradeoff.",
"Regenerating codes are a class of codes proposed for providing reliability of data and efficient repair of failed nodes in distributed storage systems. In this paper, we address the fundamental problem of handling errors and erasures at the nodes or links, during the data-reconstruction and node-repair operations. We provide explicit regenerating codes that are resilient to errors and erasures, and show that these codes are optimal with respect to storage and bandwidth requirements. As a special case, we also establish the capacity of a class of distributed storage systems in the presence of malicious adversaries. While our code constructions are based on previously constructed Product-Matrix codes, we also provide necessary and sufficient conditions for introducing resilience in any regenerating code.",
"This paper studies the recovery from multiple node failures in distributed storage systems. We design a mutually cooperative recovery (MCR) mechanism for multiple node failures. Via a cut-based analysis of the information flow graph, we obtain a lower bound of maintenance bandwidth based on MCR. For MCR, we also propose a transmission scheme and design a linear network coding scheme based on (?, ?) strong-MDS code, which is a generalization of (?, ?) MDS code. We prove that the maintenance bandwidth based on our transmission and coding schemes matches the lower bound, so the lower bound is tight and the transmission scheme and coding scheme for MCR are optimal. We also give numerical comparisons of MCR with other redundancy recovery mechanisms in storage cost and maintenance bandwidth to show the advantage of MCR.",
"Regenerating codes are a class of codes for distributed storage networks that provide reliability and availability of data, and also perform efficient node repair. Another important aspect of a distributed storage network is its security. In this paper, we consider a threat model where an eavesdropper may gain access to the data stored in a subset of the storage nodes, and possibly also, to the data downloaded during repair of some nodes. We provide explicit constructions of regenerating codes that achieve information-theoretic secrecy capacity in this setting.",
""
]
} |
1507.05454 | 2215012921 | This work has been partially supported by the EU (FEDER) and the Spanish Ministerio de Economia y Competitividad under grant TIN2013-44742-C4-1-R and by the Generalitat Valenciana under grant PROMETEOII/2015/013. Part of this research was done while the third author was visiting the University of Reunion; G. Vidal gratefully acknowledges their hospitality. | DBLP:conf/iclp/MeraLH09 present a framework unifying unit testing and run-time verification for the Ciao system @cite_8 . The ECLiPSe constraint programming system @cite_4 and SICStus Prolog @cite_2 both provide tools which run a given goal and compute how often program points in the code were executed. SWI-Prolog @cite_9 offers a unit testing tool associated with an optional interactive generation of test cases. It also includes an experimental coverage analysis which runs a given goal and computes the percentage of the used clauses and failing clauses. DBLP:conf/issta/BelliJ93 and DBLP:conf/lopstr/DegraveSV08 consider automatic generation of test inputs for strongly typed and moded logic programming languages like the Mercury programming language @cite_1 , whereas we only require moding the top-level predicate of the program. | {
"cite_N": [
"@cite_4",
"@cite_8",
"@cite_9",
"@cite_1",
"@cite_2"
],
"mid": [
"2007301302",
"2142044962",
"",
"1982243747",
"2007744038"
],
"abstract": [
"ECLiPSe is a Prolog-based programming system, aimed at the development and deployment of constraint programming applications. It is also used for teaching most aspects of combinatorial problem solving, for example, problem modelling, constraint programming, mathematical programming and search techniques. It uses an extended Prolog as its high-level modelling and control language, complemented by several constraint solver libraries, interfaces to third-party solvers, an integrated development environment and interfaces for embedding into host environments. This paper discusses language extensions, implementation aspects, components, and tools that we consider relevant on the way from Logic Programming to Constraint Logic Programming.",
"We provide an overall description of the Ciao multiparadigm programming system emphasizing some of the novel aspects and motivations behind its design and implementation. An important aspect of Ciao is that, in addition to supporting logic programming (and, in particular, Prolog), it provides the programmer with a large number of useful features from different programming paradigms and styles and that the use of each of these features (including those of Prolog) can be turned on and off at will for each program module. Thus, a given module may be using, e.g., higher order functions and constraints, while another module may be using assignment, predicates, Prolog meta-programming, and concurrency. Furthermore, the language is designed to be extensible in a simple and modular way. Another important aspect of Ciao is its programming environment, which provides a powerful preprocessor (with an associated assertion language) capable of statically finding non-trivial bugs, verifying that programs comply with specifications, and performing many types of optimizations (including automatic parallelization). Such optimizations produce code that is highly competitive with other dynamic languages or, with the (experimental) optimizing compiler, even that of static languages, all while retaining the flexibility and interactive development of a dynamic language. This compilation architecture supports modularity and separate compilation throughout. The environment also includes a powerful autodocumenter and a unit testing framework, both closely integrated with the assertion system. The paper provides an informal overview of the language and program development environment. It aims at illustrating the design philosophy rather than at being exhaustive, which would be impossible in a single journal paper, pointing instead to previous Ciao literature.",
"",
"We introduce Mercury, a new purely declarative logic programming language designed to provide the support that groups of application programmers need when building large programs. Mercury's strong type, mode, and determinism systems improve program reliability by catching many errors at compile time. We present a new and relatively simple execution model that takes advantage of the information these systems provide, yielding very efficient code. The Mercury compiler uses this execution model to generate portable C code. Our benchmarking shows that the code generated by our implementation is significantly faster than the code generated by mature optimizing implementations of other logic programming languages.",
"SICStus Prolog has evolved for nearly 25 years. This is an appropriate point in time for revisiting the main language and design decisions, and try to distill some lessons. SICStus Prolog was conceived in a context of multiple, conflicting Prolog dialect camps and a fledgling standardization effort. We reflect on the impact of this effort and role model implementations on our development. After summarizing the development history, we give a guided tour of the system anatomy, exposing some designs that were not published before. We give an overview of our new interactive development environment, and describe a sample of key applications. Finally, we try to identify key good and not so good design decisions."
]
} |
1507.05454 | 2215012921 | This work has been partially supported by the EU (FEDER) and the Spanish Ministerio de Economia y Competitividad under grant TIN2013-44742-C4-1-R and by the Generalitat Valenciana under grant PROMETEOII/2015/013. Part of this research was done while the third author was visiting the University of Reunion; G. Vidal gratefully acknowledges their hospitality. | One of the closest approaches to our work is the test case generation technique by @cite_10 . The main difference, though, is that their technique is based solely on traditional symbolic execution. As mentioned before, concolic testing may scale better since one can deal with more complex constraints by using data from the concrete component of the concolic state. Another difference is that we aim at full path coverage (i.e., choice coverage), and not only a form of statement coverage. | {
"cite_N": [
"@cite_10"
],
"mid": [
"77236829"
],
"abstract": [
"The focus of this tutorial is white-box test case generation (TCG) based on symbolic execution. Symbolic execution consists in executing a program with the contents of its input arguments being symbolic variables rather than concrete values. A symbolic execution tree characterizes the set of execution paths explored during the symbolic execution of a program. Test cases can then be obtained from the successful branches of the tree. The tutorial is split into three parts: (1) The first part overviews the basic techniques used in TCG to ensure termination, handling heap-manipulating programs, achieving compositionality in the process and guiding TCG towards interesting test cases. (2) In the second part, we focus on a particular implementation of the TCG framework in constraint logic programming (CLP). In essence, the imperative object-oriented program under test is automatically transformed into an equivalent executable CLP-translated program. The main advantage of CLP-based TCG is that the standard mechanism of CLP performs symbolic execution for free. The PET system is an open-source software that implements this approach. (3) Finally, in the last part, we study the extension of TCG to actor-based concurrent programs."
]
} |
1507.05454 | 2215012921 | This work has been partially supported by the EU (FEDER) and the Spanish Ministerio de Economia y Competitividad under grant TIN2013-44742-C4-1-R and by the Generalitat Valenciana under grant PROMETEOII/2015/013. Part of this research was done while the third author was visiting the University of Reunion; G. Vidal gratefully acknowledges their hospitality. | Another close approach is @cite_15 , where a concolic execution semantics for logic programs is presented. However, this approach only considers a simpler statement coverage and, thus, it can be seen as a particular instance of the technique in the present paper. Another significant difference is that, in @cite_15 , concolic execution proceeds in a stepwise manner: first, concrete execution produces an execution trace, which is then used to drive concolic execution. Although this scheme is conceptually simpler, it may give rise to poorer results in practice since one cannot use concrete values in symbolic executions, one of the main advantages of concolic execution over traditional symbolic execution. Moreover, @cite_15 presents no formal results nor an implementation of the concolic execution technique. | {
"cite_N": [
"@cite_15"
],
"mid": [
"844671342"
],
"abstract": [
"Symbolic execution extends concrete execution by allowing symbolic input data and then exploring all feasible execution paths. It has been defined and used in the context of many different programming languages and paradigms. A symbolic execution engine is at the heart of many program analysis and transformation techniques, like partial evaluation, test case generation or model checking, to name a few. Despite its relevance, traditional symbolic execution also suffers from several drawbacks. For instance, the search space is usually huge (often infinite) even for the simplest programs. Also, symbolic execution generally computes an overapproximation of the concrete execution space, so that false positives may occur. In this paper, we propose the use of a variant of symbolic execution, called concolic execution, for test case generation in Prolog. Our technique aims at full statement coverage. We argue that this technique computes an underapproximation of the concrete execution space (thus avoiding false positives) and scales up better to medium and large Prolog applications."
]
} |
1507.05810 | 2133205306 | The Datagram Transport Layer Security (DTLS) protocol is the IETF standard for securing the Internet of Things. The Constrained Application Protocol, ZigBee IP, and Lightweight Machine-to-Machine (LWM2M) mandate its use for securing application traffic. There has been much debate in both the standardization and research communities on the applicability of DTLS to constrained environments. The main concerns are the communication overhead and latency of the DTLS handshake, and the memory footprint of a DTLS implementation. This paper provides a thorough performance evaluation of DTLS in different duty-cycled networks through real-world experimentation, emulation and analysis. In particular, we measure the duration of the DTLS handshake when using three duty cycling link-layer protocols: preamble-sampling, the IEEE 802.15.4 beacon-enabled mode and the IEEE 802.15.4e Time Slotted Channel Hopping mode. The reported results demonstrate surprisingly poor performance of DTLS in radio duty-cycled networks. Because a DTLS client and a server exchange more than 10 signaling packets, the DTLS handshake takes between a handful of seconds and several tens of seconds, with similar results for different duty cycling protocols. Moreover, because of their limited memory, typical constrained nodes can only maintain 3-5 simultaneous DTLS sessions, which highlights the need for using DTLS parsimoniously. | Hummen @cite_16 @cite_21 proposed different techniques to lower the impact of the DTLS handshake on constrained devices: certificate pre-validation at the network gateway and handshake delegation to the ``delegation server''. Raza @cite_17 focused on reducing the per-datagram overhead and proposed a 6LoWPAN DTLS compression scheme. In our previous work, we studied the benefits of a stateless security architecture called OSCAR @cite_8 , based on DTLS, in order to support group communication and caching. | {
"cite_N": [
"@cite_8",
"@cite_16",
"@cite_21",
"@cite_17"
],
"mid": [
"",
"2041950790",
"2019682598",
"2017866470"
],
"abstract": [
"",
"The vision of the Internet of Things considers smart objects in the physical world as first-class citizens of the digital world. Especially IP technology and RESTful web services on smart objects promise simple interactions with Internet services in the Web of Things, e.g., for building automation or in e-health scenarios. Peer authentication and secure data transmission are vital aspects in many of these scenarios to prevent leakage of personal information and harmful actuating tasks. While standard security solutions exist for traditional IP networks, the constraints of smart objects demand for more lightweight security mechanisms. Thus, the use of certificates for peer authentication is predominantly considered impracticable. In this paper, we investigate if this assumption is valid. To this end, we present preliminary overhead estimates for the certificate-based DTLS handshake and argue that certificates - with improvements to the handshake - are a viable method of authentication in many network scenarios. We propose three design ideas to reduce the overheads of the DTLS handshake. These ideas are based on (i) pre-validation, (ii) session resumption, and (iii) handshake delegation. We qualitatively analyze the expected overhead reductions and discuss their applicability.",
"IP technology for resource-constrained devices enables transparent end-to-end connections between a vast variety of devices and services in the Internet of Things (IoT). To protect these connections, several variants of traditional IP security protocols have recently been proposed for standardization, most notably the DTLS protocol. In this paper, we identify significant resource requirements for the DTLS handshake when employing public-key cryptography for peer authentication and key agreement purposes. These overheads particularly hamper secure communication for memory-constrained devices. To alleviate these limitations, we propose a delegation architecture that offloads the expensive DTLS connection establishment to a delegation server. By handing over the established security context to the constrained device, our delegation architecture significantly reduces the resource requirements of DTLS-protected communication for constrained devices. Additionally, our delegation architecture naturally provides authorization functionality when leveraging the central role of the delegation server in the initial connection establishment. Hence, in this paper, we present a comprehensive, yet compact solution for authentication, authorization, and secure data transmission in the IP-based IoT. The evaluation results show that compared to a public-key-based DTLS handshake our delegation architecture reduces the memory overhead by 64%, computations by 97%, and network transmissions by 68%.",
"The Internet of Things (IoT) enables a wide range of application scenarios with potentially critical actuating and sensing tasks, e.g., in the e-health domain. For communication at the application layer, resource-constrained devices are expected to employ the constrained application protocol (CoAP) that is currently being standardized at the Internet Engineering Task Force. To protect the transmission of sensitive information, secure CoAP mandates the use of datagram transport layer security (DTLS) as the underlying security protocol for authenticated and confidential communication. DTLS, however, was originally designed for comparably powerful devices that are interconnected via reliable, high-bandwidth links. In this paper, we present Lithe, an integration of DTLS and CoAP for the IoT. With Lithe, we additionally propose a novel DTLS header compression scheme that aims to significantly reduce the energy consumption by leveraging the 6LoWPAN standard. Most importantly, our proposed DTLS header compression scheme does not compromise the end-to-end security properties provided by DTLS. Simultaneously, it considerably reduces the number of transmitted bytes while maintaining DTLS standard compliance. We evaluate our approach based on a DTLS implementation for the Contiki operating system. Our evaluation results show significant gains in terms of packet size, energy consumption, processing time, and network-wide response times when compressed DTLS is enabled."
]
} |
1507.05810 | 2133205306 | The Datagram Transport Layer Security (DTLS) protocol is the IETF standard for securing the Internet of Things. The Constrained Application Protocol, ZigBee IP, and Lightweight Machine-to-Machine (LWM2M) mandate its use for securing application traffic. There has been much debate in both the standardization and research communities on the applicability of DTLS to constrained environments. The main concerns are the communication overhead and latency of the DTLS handshake, and the memory footprint of a DTLS implementation. This paper provides a thorough performance evaluation of DTLS in different duty-cycled networks through real-world experimentation, emulation and analysis. In particular, we measure the duration of the DTLS handshake when using three duty cycling link-layer protocols: preamble-sampling, the IEEE 802.15.4 beacon-enabled mode and the IEEE 802.15.4e Time Slotted Channel Hopping mode. The reported results demonstrate surprisingly poor performance of DTLS in radio duty-cycled networks. Because a DTLS client and a server exchange more than 10 signaling packets, the DTLS handshake takes between a handful of seconds and several tens of seconds, with similar results for different duty cycling protocols. Moreover, because of their limited memory, typical constrained nodes can only maintain 3-5 simultaneous DTLS sessions, which highlights the need for using DTLS parsimoniously. | The recent work of Capossele @cite_19 explores the idea of abstracting the DTLS handshake as a CoAP resource and implementing the handshake procedure using CoAP methods. The advantage of this approach is that DTLS can leverage the reliability of confirmable CoAP messages, as well as the blockwise transfer for large messages. The drawback, however, is lost backward compatibility with the existing Internet infrastructure. | {
"cite_N": [
"@cite_19"
],
"mid": [
"1481292844"
],
"abstract": [
"The growing number of applications based on Internet of Things (IoT) technologies is pushing towards standardized protocol stacks for machine-to-machine (M2M) communication and the adoption of standard-based security solutions, such as the Datagram Transport Layer Security (DTLS). Despite the huge diffusion of DTLS, there is a lack of optimized implementations tailored to resource constrained devices. High energy consumption and long delays of current implementations limit their effective usage in real-life deployments. The aim of this paper is to explain how to integrate the DTLS protocol inside the Constrained Application Protocol (CoAP), exploiting Elliptic Curve Cryptography (ECC) optimizations and minimizing ROM occupancy. We have implemented our solution on an off-the-shelf mote platform and evaluated its performance. Results show that our ECC optimizations outperform prior scalar multiplication implementations in the state of the art for class 1 mote platforms, and improve network lifetime by a factor of up to 6.5 with respect to a standard-based, non-optimized implementation."
]
} |
1507.05796 | 2950442115 | Error-correcting codes are efficient methods for handling communication channels in the context of technological networks. However, such elaborate methods differ a lot from the unsophisticated way biological entities are supposed to communicate. Yet, it has been recently shown by Feinerman, Haeupler, and Korman [ PODC 2014 ] that complex coordination tasks such as rumor spreading and majority consensus can plausibly be achieved in biological systems subject to noisy communication channels, where every message transferred through a channel remains intact with small probability @math , without using coding techniques. This result is a considerable step towards a better understanding of the way biological entities may cooperate. It has nevertheless been established only in the case of 2-valued opinions: rumor spreading aims at broadcasting a single-bit opinion to all nodes, and majority consensus aims at leading all nodes to adopt the single-bit opinion that was initially present in the system with (relative) majority. In this paper, we extend this previous work to @math -valued opinions, for any @math . Our extension requires addressing a series of important issues, some conceptual, others technical. We had to entirely revisit the notion of noise, for handling channels carrying @math - messages. In fact, we precisely characterize the type of noise patterns for which plurality consensus is solvable. Also, a key result employed in the bivalued case by Feinerman, Haeupler, and Korman is an estimate of the probability of observing the most frequent opinion from observing the mode of a small sample. We generalize this result to the multivalued case by providing a new analytical proof for the bivalued case that is amenable to be extended, by induction, and that is of independent interest. | A general result by @cite_24 shows how to compute a large class of functions in the gossip model. However, their protocol requires the nodes to send slightly more complex messages than their sole current opinion, and its effectiveness heavily relies on a potential function argument that does not hold in the presence of noise. | {
"cite_N": [
"@cite_24"
],
"mid": [
"2110100895"
],
"abstract": [
"Over the last decade, we have seen a revolution in connectivity between computers, and a resulting paradigm shift from centralized to highly distributed systems. With massive scale also comes massive instability, as node and link failures become the norm rather than the exception. For such highly volatile systems, decentralized gossip-based protocols are emerging as an approach to maintaining simplicity and scalability while achieving fault-tolerant information dissemination. In this paper, we study the problem of computing aggregates with gossip-style protocols. Our first contribution is an analysis of simple gossip-based protocols for the computation of sums, averages, random samples, quantiles, and other aggregate functions, and we show that our protocols converge exponentially fast to the true answer when using uniform gossip. Our second contribution is the definition of a precise notion of the speed with which a node's data diffuses through the network. We show that this diffusion speed is at the heart of the approximation guarantees for all of the above problems. We analyze the diffusion speed of uniform gossip in the presence of node and link failures, as well as for flooding-based mechanisms. The latter expose interesting connections to random walks on graphs."
]
} |
1507.05472 | 2950523786 | Several scientific and industry applications require High Performance Computing (HPC) resources to process and/or simulate complex models. Not long ago, companies, research institutes, and universities used to acquire and maintain on-premise computer clusters; but, recently, cloud computing has emerged as an alternative for a subset of HPC applications. This poses a challenge to end-users, who have to decide where to run their jobs: on local clusters or burst to a remote cloud service provider. While current research on HPC cloud has focused on comparing performance of on-premise clusters against cloud resources, we build on top of existing efforts and introduce an advisory service to help users make this decision considering the trade-offs of resource costs, performance, and availability on hybrid clouds. We evaluated our service using a real test-bed with a seismic processing application based on Full Waveform Inversion, a technique used by geophysicists in the oil & gas industry and earthquake prediction. We also discuss how the advisor can be used for other applications and highlight the main lessons learned constructing this service to reduce costs and turnaround times. | Ostermann @cite_6 evaluated whether the performance of cloud environments is sufficient for scientific computing. Their results showed that cloud environments were insufficient for scientific computing at the time of their publication. At the same time, Napper and Bientinesi @cite_8 used Linpack benchmarks to evaluate whether cloud could potentially be included in the top 500 list of supercomputing. Their results showed that the performance of single cloud nodes was as good as nodes in HPC systems; however, memory and network were not sufficient to scale the application. Vecchiola @cite_22 also evaluated the use of clouds for scientific applications. They concluded that clouds are effective for conducting scientific experiments, but the trade-off between costs and performance has to be evaluated case by case. | {
"cite_N": [
"@cite_22",
"@cite_6",
"@cite_8"
],
"mid": [
"2164304411",
"2130062566",
"2108376207"
],
"abstract": [
"Scientific computing often requires the availability of a massive number of computers for performing large scale experiments. Traditionally, these needs have been addressed by using high-performance computing solutions and installed facilities such as clusters and super computers, which are difficult to setup, maintain, and operate. Cloud computing provides scientists with a completely new model of utilizing the computing infrastructure. Compute resources, storage resources, as well as applications, can be dynamically provisioned (and integrated within the existing infrastructure) on a pay per use basis. These resources can be released when they are no more needed. Such services are often offered within the context of a Service Level Agreement (SLA), which ensure the desired Quality of Service (QoS). Aneka, an enterprise Cloud computing solution, harnesses the power of compute resources by relying on private and public Clouds and delivers to users the desired QoS. Its flexible and service based infrastructure supports multiple programming paradigms that make Aneka address a variety of different scenarios: from finance applications to computational science. As examples of scientific computing in the Cloud, we present a preliminary case study on using Aneka for the classification of gene expression data and the execution of fMRI brain imaging workflow.",
"Cloud Computing is emerging today as a commercial infrastructure that eliminates the need for maintaining expensive computing hardware. Through the use of virtualization, clouds promise to address with the same shared set of physical resources a large user base with different needs. Thus, clouds promise to be for scientists an alternative to clusters, grids, and supercomputers. However, virtualization may induce significant performance penalties for the demanding scientific computing workloads. In this work we present an evaluation of the usefulness of the current cloud computing services for scientific computing. We analyze the performance of the Amazon EC2 platform using micro-benchmarks and kernels. While clouds are still changing, our results indicate that the current cloud services need an order of magnitude in performance improvement to be useful to the scientific community.",
"Computing as a utility has reached the mainstream. Scientists can now rent time on large commercial clusters through several vendors. The cloud computing model provides flexible support for \"pay as you go\" systems. In addition to no upfront investment in large clusters or supercomputers, such systems incur no maintenance costs. Furthermore, they can be expanded and reduced on-demand in real-time. Current cloud computing performance falls short of systems specifically designed for scientific applications. Scientific computing needs are quite different from those of web applications--composed primarily of database queries--that have been the focus of cloud computing vendors. In this paper we investigate the use of cloud computing for high-performance numerical applications. In particular, we assume unlimited monetary resources to answer the question, \"How high can a cloud computing service get in the TOP500 list?\" We show results for the Linpack benchmark on different allocations on Amazon EC2."
]
} |
1507.05472 | 2950523786 | Several scientific and industry applications require High Performance Computing (HPC) resources to process and/or simulate complex models. Not long ago, companies, research institutes, and universities used to acquire and maintain on-premise computer clusters; but, recently, cloud computing has emerged as an alternative for a subset of HPC applications. This poses a challenge to end-users, who have to decide where to run their jobs: on local clusters or burst to a remote cloud service provider. While current research on HPC cloud has focused on comparing performance of on-premise clusters against cloud resources, we build on top of existing efforts and introduce an advisory service to help users make this decision considering the trade-offs of resource costs, performance, and availability on hybrid clouds. We evaluated our service using a real test-bed with a seismic processing application based on Full Waveform Inversion; a technique used by geophysicists in the oil & gas industry and earthquake prediction. We also discuss how the advisor can be used for other applications and highlight the main lessons learned constructing this service to reduce costs and turnaround times. | Following a new phase of HPC cloud studies, Mateescu @cite_5 evaluated a set of benchmarks and complex HPC applications on a range of platforms, both in-house and in the cloud. The studies showed cloud effectiveness for such applications mainly in the case of complementing supercomputers. Gupta and Milojicic @cite_18 highlighted that cloud can be suitable for some HPC applications, but not all. The same group of authors @cite_17 performed a set of experiments using benchmarks and complex HPC applications on various platforms, including supercomputers and clouds, to answer the question "why and who should choose cloud for HPC, for what applications, and how should cloud be used for HPC?". | {
"cite_N": [
"@cite_5",
"@cite_18",
"@cite_17"
],
"mid": [
"2082245129",
"2088445131",
"2624365150"
],
"abstract": [
"We introduce a hybrid High Performance Computing (HPC) infrastructure architecture that provides predictable execution of scientific applications, and scales from a single resource to multiple resources, with different ownership, policy, and geographic locations. We identify three paradigms in the evolution of HPC and high-throughput computing: owner-centric HPC (traditional), Grid computing, and Cloud computing. After analyzing the synergies among HPC, Grid and Cloud computing, we argue for an architecture that combines the benefits of these technologies. We call the building block of this architecture, Elastic Cluster. We describe the concept of Elastic Cluster and show how it can be used to achieve effective and predictable execution of HPC workloads. Then we discuss implementation aspects, and propose a new distributed information system design that combines features of distributed hash tables and relational databases.",
"HPC applications are increasingly being used in academia and laboratories for scientific research and in industries for business and analytics. Cloud computing offers the benefits of virtualization, elasticity of resources and elimination of cluster setup cost and time to HPC applications users. However, poor network performance, performance variation and OS noise are some of the challenges for execution of HPC applications on Cloud. In this paper, we propose that Cloud can be viable platform for some HPC applications depending upon application characteristics such as communication volume and pattern and sensitivity to OS noise and scale. We present an evaluation of the performance and cost tradeoffs of HPC applications on a range of platforms varying from Cloud (with and without virtualization) to HPC-optimized cluster. Our results show that Cloud is viable platform for some applications, specifically, non communicationintensive applications such as embarrassingly parallel and tree-structured computations up to high processor count and for communication-intensive applications up to low processor count.",
""
]
} |
1507.05472 | 2950523786 | Several scientific and industry applications require High Performance Computing (HPC) resources to process and/or simulate complex models. Not long ago, companies, research institutes, and universities used to acquire and maintain on-premise computer clusters; but, recently, cloud computing has emerged as an alternative for a subset of HPC applications. This poses a challenge to end-users, who have to decide where to run their jobs: on local clusters or burst to a remote cloud service provider. While current research on HPC cloud has focused on comparing performance of on-premise clusters against cloud resources, we build on top of existing efforts and introduce an advisory service to help users make this decision considering the trade-offs of resource costs, performance, and availability on hybrid clouds. We evaluated our service using a real test-bed with a seismic processing application based on Full Waveform Inversion; a technique used by geophysicists in the oil & gas industry and earthquake prediction. We also discuss how the advisor can be used for other applications and highlight the main lessons learned constructing this service to reduce costs and turnaround times. | More recently, Belgacem and Chopard @cite_23 evaluated a computational fluid dynamics application over a heterogeneous environment of a cluster and a cloud. They corroborated a previous work @cite_20 that demonstrated the potential of clouds for fluid dynamics. Mantripragada @cite_7 proposed a hybrid method that uses a local cluster plus the cloud to dynamically boost computing-intensive applications, handling data partitioning so as to respect performance deadlines. The results using a seismic application showed that the approach is feasible given a seamless connection between both environments, despite synchronization concerns.
Outside the HPC efforts, Unuvar @cite_2 introduced a hybrid cloud placement algorithm, which focuses on application structure to allocate multiple VMs. | {
"cite_N": [
"@cite_20",
"@cite_7",
"@cite_23",
"@cite_2"
],
"mid": [
"2041693760",
"1818445599",
"",
"2038469714"
],
"abstract": [
"In this paper, we report on the results of numerical experiments in the field of computational fluid dynamics (CFD) on Amazon's HPC cloud. To this end, we benchmarked our MPI-parallel fluid solver NaSt3DGPF on several HPC compute nodes of the cloud system. Our solver can use CPUs and GPUs to calculate the simulation results. With a pre-requested number of instances we observe for both, CPUs and GPUs, good scalability even for a larger number of parallel processes, provided that the GPUs run in the non-ECC mode. Furthermore, we see a high potential for medium sized parallel compute problems which are typically present in industrial engineering applications.",
"High intensive computation applications can usually take days to months to nish an execution. During this time, it is common to have variations of the available resources when considering that such hardware is usually shared among a plurality of researchers departments within an organization. On the other hand, High Performance Clusters can take advantage of Cloud Computing bursting techniques for the execution of applications together with on-premise resources. In order to meet deadlines, high intensive computational applications can use the Cloud to boost their performance when they are data and task parallel. This article presents an ongoing work towards the use of extended resources of an HPC execution platform together with Cloud. We propose an unied view of such heterogeneous environments and a method that monitors, predicts the application execution time, and dynamically shifts part of the domain previously running in local HPC hardware to be computed in the Cloud, meeting then a specic deadline. The method is exemplied along with a seismic application that, at runtime, adapts itself to move part of the processing to the Cloud (in a movement called bursting) and also auto-scales (the moved part) over cloud nodes. Our preliminary results show that there is an expected overhead for performing this movement and for synchronizing results, but the outcomes demonstrate it is an important feature for meeting deadlines in the case an on-premise cluster is overloaded or cannot provide the capacity needed for a particular project.",
"",
"A fully functional hybrid cloud solution requires a placement service to automatically decide whether an application should be deployed on premise, in a public cloud, or across private and public clouds. Such a service must consider application structure and communication patterns, application affinity requirements, which usually result from data protection rules, and deployment costs. In this paper, we propose a hybrid cloud placement approach which addresses these challenges. Our approach considers application requirements, cost, and private cloud capacity. Further, it is tunable to allow for changing application patterns and business objectives and offers a useful trade-off between application QoS and its deployment cost."
]
} |
1507.05455 | 1961113256 | The characterisation of time-series data via their most salient features is extremely important in a range of machine learning tasks, not least of all with regards to classification and clustering. While there exist many feature extraction techniques suitable for non-intermittent time-series data, these approaches are not always appropriate for intermittent time-series data, where intermittency is characterized by constant values for large periods of time punctuated by sharp and transient increases or decreases in value. Motivated by this, we present aggregation, mode decomposition and projection (AMP), a feature extraction technique particularly suited to intermittent time-series data which contain time-frequency patterns. For our method, all individual time-series within a set are combined to form a non-intermittent aggregate. This is decomposed into a set of components which represent the intrinsic time-frequency signals within the data set. Individual time-series can then be fit to these components to obtain a set of numerical features that represent their intrinsic time-frequency patterns. To demonstrate the effectiveness of AMP, we evaluate against the real world task of clustering intermittent time-series data. Using synthetically generated data we show that a clustering approach which uses the features derived from AMP significantly outperforms traditional clustering methods. Our technique is further exemplified on a real world data set where AMP can be used to discover groupings of individuals which correspond to real world sub-populations. | Frequency domain based approaches are most commonly underpinned by the discrete Fourier or wavelet transformation of the data. For example, Vlachos used periodic features obtained partly via the direct Fourier decomposition for clustering of MSN query log and electrocardiography time-series data @cite_45 .
Features derived from wavelet representations have also been used to cluster synthetic and electrical signals @cite_26 . | {
"cite_N": [
"@cite_45",
"@cite_26"
],
"mid": [
"27994497",
"1749514479"
],
"abstract": [
"This work motivates the need for more flexible structural similarity measures between time-series sequences, which are based on the extraction of important periodic features. Specifically, we present non-parametric methods for accurate periodicity detection and we introduce new periodic distance measures for time-series sequences. The goal of these tools and techniques are to assist in detecting, monitoring and visualizing structural periodic changes. It is our belief that these methods can be directly applicable in the manufacturing industry for preventive maintenance and in the medical sciences for accurate classification and anomaly detection.",
"A wavelet-based procedure for clustering signals is proposed. It combines an individual signal preprocessing by wavelet denoising, a dimensionality reduction step by wavelet compression and a classical clustering strategy applied to a suitably chosen set of wavelet coefficients. The ability of wavelets to cope with signals of arbitrary or time-dependent regularity as well as to concentrate signal energy in few large coefficients, offers a useful tool to carry out both significant noise reduction and efficient compression. A simulated example and an electrical dataset are considered to illustrate the value of introducing wavelets for clustering such complex data."
]
} |
1507.05497 | 2952471657 | We propose a new algorithm for recommender systems with numeric ratings which is based on Pattern Structures (RAPS). As input, the algorithm takes a rating matrix, e.g., one that contains movies rated by users. For a target user, the algorithm returns a rated list of items (movies) based on its previous ratings and ratings of other users. We compare the results of the proposed algorithm in terms of precision and recall measures with Slope One, one of the state-of-the-art item-based algorithms, on the MovieLens dataset, and RAPS demonstrates the best or comparable quality. | Formal Concept Analysis (FCA) @cite_1 is a powerful algebraic framework for knowledge representation and processing @cite_8 @cite_5 . However, in its original formulation it deals mainly with Boolean data. Even though original numeric data can be represented by a so-called multi-valued context, it requires concept scaling to be transformed into a plain context (i.e. a binary object-attribute table). There are several extensions of FCA to the numeric setting, like Fuzzy Formal Concept Analysis @cite_10 @cite_12 . In this paper, to recommend items of interest to a particular user, we use Pattern Structures, an extension of FCA to deal with data that have ordered descriptions. In fact, we use interval pattern structures that were proposed in @cite_13 and successfully applied, e.g., in gene expression data analysis @cite_6 . | {
"cite_N": [
"@cite_13",
"@cite_8",
"@cite_1",
"@cite_6",
"@cite_5",
"@cite_10",
"@cite_12"
],
"mid": [
"1852340955",
"",
"1503729935",
"2074310030",
"2102224599",
"1826636392",
"2046030110"
],
"abstract": [
"Pattern structures consist of objects with descriptions (called patterns) that allow a semilattice operation on them. Pattern structures arise naturally from ordered data, e.g., from labeled graphs ordered by graph morphisms. It is shown that pattern structures can be reduced to formal contexts, however sometimes processing the former is often more efficient and obvious than processing the latter. Concepts, implications, plausible hypotheses, and classifications are defined for data given by pattern structures. Since computation in pattern structures may be intractable, approximations of patterns by means of projections are introduced. It is shown how concepts, implications, hypotheses, and classifications in projected pattern structures are related to those in original ones.",
"",
"From the Publisher: This is the first textbook on formal concept analysis. It gives a systematic presentation of the mathematical foundations and their relation to applications in computer science, especially in data analysis and knowledge processing. Above all, it presents graphical methods for representing conceptual systems that have proved themselves in communicating knowledge. Theory and graphical representation are thus closely coupled together. The mathematical foundations are treated thoroughly and illuminated by means of numerous examples.",
"This paper addresses the important problem of efficiently mining numerical data with formal concept analysis (FCA). Classically, the only way to apply FCA is to binarize the data, thanks to a so-called scaling procedure. This may either involve loss of information, or produce large and dense binary data known as hard to process. In the context of gene expression data analysis, we propose and compare two FCA-based methods for mining numerical data and we show that they are equivalent. The first one relies on a particular scaling, encoding all possible intervals of attribute values, and uses standard FCA techniques. The second one relies on pattern structures without a priori transformation, and is shown to be more computationally efficient and to provide more readable results. Experiments with real-world gene expression data are discussed and give a practical basis for the comparison and evaluation of the methods.",
"This is the first part of a large survey paper in which we analyze recent literature on Formal Concept Analysis (FCA) and some closely related disciplines using FCA. We collected 1072 papers published between 2003 and 2011 mentioning terms related to Formal Concept Analysis in the title, abstract and keywords. We developed a knowledge browsing environment to support our literature analysis process. We use the visualization capabilities of FCA to explore the literature, to discover and conceptually represent the main research topics in the FCA community. In this first part, we zoom in on and give an extensive overview of the papers published between 2003 and 2011 on developing FCA-based methods for knowledge processing. We also give an overview of the literature on FCA extensions such as pattern structures, logical concept analysis, relational concept analysis, power context families, fuzzy FCA, rough FCA, temporal and triadic concept analysis and discuss scalability issues.",
"This paper is a follow up to \"Belohlavek, Vychodil: What is a fuzzy concept lattice?, Proc. CLA 2005, 34-45\", in which we provided a then up-to-date overview of various approaches to fuzzy concept lattices and relationships among them. The main goal of the present paper is different, namely to provide an overview of conceptual issues in fuzzy concept lattices. Emphasized are the issues in which fuzzy concept lattices differ from ordinary concept lattices. In a sense, this paper is written for people familiar with ordinary concept lattices who would like to learn about fuzzy concept lattices. Due to the page limit, the pape.",
"Formal Concept Analysis (FCA) is a mathematical technique that has been extensively applied to Boolean data in knowledge discovery, information retrieval, web mining, etc. applications. During the past years, the research on extending FCA theory to cope with imprecise and incomplete information made significant progress. In this paper, we give a systematic overview of the more than 120 papers published between 2003 and 2011 on FCA with fuzzy attributes and rough FCA. We applied traditional FCA as a text-mining instrument to 1072 papers mentioning FCA in the abstract. These papers were formatted in pdf files and using a thesaurus with terms referring to research topics, we transformed them into concept lattices. These lattices were used to analyze and explore the most prominent research topics within the FCA with fuzzy attributes and rough FCA research communities. FCA turned out to be an ideal metatechnique for representing large volumes of unstructured texts."
]
} |
1507.05497 | 2952471657 | We propose a new algorithm for recommender systems with numeric ratings which is based on Pattern Structures (RAPS). As input, the algorithm takes a rating matrix, e.g., one that contains movies rated by users. For a target user, the algorithm returns a rated list of items (movies) based on its previous ratings and ratings of other users. We compare the results of the proposed algorithm in terms of precision and recall measures with Slope One, one of the state-of-the-art item-based algorithms, on the MovieLens dataset, and RAPS demonstrates the best or comparable quality. | The paper is organised as follows. In Section , basic FCA definitions and interval pattern structures are introduced. Section describes SlopeOne @cite_14 and RAPS with examples. In Section , we provide the results of experiments with time performance and precision-recall evaluation for the MovieLens dataset. Section concludes the paper. | {
"cite_N": [
"@cite_14"
],
"mid": [
"2143520694"
],
"abstract": [
"Online evaluation is amongst the few evaluation techniques available to the information retrieval community that is guaranteed to reflect how users actually respond to improvements developed by the community. Broadly speaking, online evaluation refers to any evaluation of retrieval quality conducted while observing user behavior in a natural context. However, it is rarely employed outside of large commercial search engines due primarily to a perception that it is impractical at small scales. The goal of this tutorial is to familiarize information retrieval researchers with state-of-the-art techniques in evaluating information retrieval systems based on natural user clicking behavior, as well as to show how such methods can be practically deployed. In particular, our focus will be on demonstrating how the Interleaving approach and other click based techniques contrast with traditional offline evaluation, and how these online methods can be effectively used in academic-scale research. In addition to lecture notes, we will also provide sample software and code walk-throughs to showcase the ease with which Interleaving and other click-based methods can be employed by students, academics and other researchers."
]
} |
1507.05290 | 2289775056 | Good parametrisations of affine transformations are essential to interpolation, deformation, and analysis of shape, motion, and animation. It has been one of the central research topics in computer graphics. However, there is no single perfect method and each one has both advantages and disadvantages. In this paper, we propose a novel parametrisation of affine transformations, which is a generalisation of or an improvement on existing methods. Our method adds yet another choice to the existing toolbox and shows better performance in some applications. A C++ implementation is available to make our framework ready to use in various applications. | The group of rigid transformations @math can be parametrised with the dual quaternions, which is a generalisation of the parametrisation of @math by the quaternions. In @cite_12 , dual quaternions were used to blend rigid transformations, and the method works particularly well in skinning. However, for other applications we may be troubled by the complicated structure of the parameter space; the space of the unit dual quaternions is the semi-direct product of the group of the unit quaternions and @math . | {
"cite_N": [
"@cite_12"
],
"mid": [
"2090782210"
],
"abstract": [
"Skinning of skelet ally deformable models is extensively used for real-time animation of characters, creatures and similar objects. The standard solution, linear blend skinning, has some serious drawbacks that require artist intervention. Therefore, a number of alternatives have been proposed in recent years. All of them successfully combat some of the artifacts, but none challenge the simplicity and efficiency of linear blend skinning. As a result, linear blend skinning is still the number one choice for the majority of developers. In this article, we present a novel skinning algorithm based on linear combination of dual quaternions. Even though our proposed method is approximate, it does not exhibit any of the artifacts inherent in previous methods and still permits an efficient GPU implementation. Upgrading an existing animation system from linear to dual quaternion skinning is very easy and has a relatively minor impact on runtime performance."
]
} |
1507.05290 | 2289775056 | Good parametrisations of affine transformations are essential to interpolation, deformation, and analysis of shape, motion, and animation. It has been one of the central research topics in computer graphics. However, there is no single perfect method and each one has both advantages and disadvantages. In this paper, we propose a novel parametrisation of affine transformations, which is a generalisation of or an improvement on existing methods. Our method adds yet another choice to the existing toolbox and shows better performance in some applications. A C++ implementation is available to make our framework ready to use in various applications. | In @cite_18 , a method to interpolate elements of @math using the polar decomposition was introduced. Transformations are decomposed into the rotation, the scale-shear, and the translation parts; then SLERP was used for interpolating the rotation part, and linear interpolation was used for the rest. This idea of viewing @math as the (non-direct) product of three spaces @math , and @math has been fundamental, and many current graphics systems adopt it. However, the parametrisations of @math by the quaternions and @math by matrices are not Euclidean. | {
"cite_N": [
"@cite_18"
],
"mid": [
"19910844"
],
"abstract": [
"General 3×3 linear or 4×4 homogenous matrices can be formed by composing primitive matrices for translation, rotation, scale, shear, and perspective. Current 3-D computer graphics systems manipulate and interpolate parametric forms of these primitives to generate scenes and motion. For this and other reasons, decomposing a composite matrix in a meaningful way has been a longstanding challenge. This paper presents a theory and method for doing so, proposing that the central issue is rotation extraction, and that the best way to do that is Polar Decomposition. This method also is useful for renormalizing a rotation matrix containing excessive error."
]
} |
1507.05290 | 2289775056 | Good parametrisations of affine transformations are essential to interpolation, deformation, and analysis of shape, motion, and animation. It has been one of the central research topics in computer graphics. However, there is no single perfect method and each one has both advantages and disadvantages. In this paper, we propose a novel parametrisation of affine transformations, which is a generalisation of or an improvement on existing methods. Our method adds yet another choice to the existing toolbox and shows better performance in some applications. A C++ implementation is available to make our framework ready to use in various applications. | On the other hand, in @cite_42 a definition of scalar multiples and addition in @math is given based on the idea of parametrising @math by the corresponding Lie algebra, which gives a Euclidean parameter space. This is a generalisation of @cite_10 , where @math is parametrised by its Lie algebra. The same idea is also used in @cite_38 . A notable feature of their construction, which is missing in @cite_18 , is that the scalar multiplication satisfies "associativity". That is, for @math , the @math -multiple of the @math -multiple of a transformation is equal to the @math -multiple of it. However, a major defect of their construction is that it does not work with transformations with negative real eigenvalues. That is, there are no representatives for some transformations and the condition (III) does not hold. This causes a big problem (see ). Our work can be considered as a workaround of this inconvenience at the cost of losing the associativity. In addition, our parametrisation comes with a few advantages, including a fast closed-form formula and better handling of large rotations. | {
"cite_N": [
"@cite_38",
"@cite_18",
"@cite_42",
"@cite_10"
],
"mid": [
"",
"19910844",
"2245379371",
"1977259876"
],
"abstract": [
"",
"General 3×3 linear or 4×4 homogenous matrices can be formed by composing primitive matrices for translation, rotation, scale, shear, and perspective. Current 3-D computer graphics systems manipulate and interpolate parametric forms of these primitives to generate scenes and motion. For this and other reasons, decomposing a composite matrix in a meaningful way has been a longstanding challenge. This paper presents a theory and method for doing so, proposing that the central issue is rotation extraction, and that the best way to do that is Polar Decomposition. This method also is useful for renormalizing a rotation matrix containing excessive error.",
"Geometric transformations are most commonly represented as square matrices in computer graphics. Following simple geometric arguments we derive a natural and geometrically meaningful definition of scalar multiples and a commutative addition of transformations based on the matrix representation, given that the matrices have no negative real eigenvalues. Together, these operations allow the linear combination of transformations. This provides the ability to create weighted combination of transformations, interpolate between transformations, and to construct or use arbitrary transformations in a structure similar to a basis of a vector space. These basic techniques are useful for synthesis and analysis of motions or animations. Animations through a set of key transformations are generated using standard techniques such as subdivision curves. For analysis and progressive compression a PCA can be applied to sequences of transformations. We describe an implementation of the techniques that enables an easy-to-use and transparent way of dealing with geometric transformations in graphics software. We compare and relate our approach to other techniques such as matrix decomposition and quaternion interpolation.",
"Parameterizing three degree-of-freedom (DOF) rotations is difficult to do well. Many graphics applications demand that we be able to compute and differentiate positions and orientations of articulated figures with respect to their rotational (and other) parameters, as well as integrate differential equations, optimize rotation parameters, and interpolate orientations. Widely used parameterizations such as Euler angles and quaternions are well suited to only a few of these operations. The exponential map maps a vector in R 3 describing the axis and magnitude of a three-DOF rotation to the corresponding rotation. Several graphics researchers have applied it with limited success to interpolation of orientations, but it has been virtually ignored with respect to the other operations mentioned above. In this paper we present formulae for computing, differentiating, and integrating three-DOF rotations with the exponential map. We show that our formulation is numerically stable in the face of machine precision issues, and that for most applications all singularities in the map can be avoided through a simple technique of dynamic reparameterization. We demonstrate how to use the exponential map to solve both the \"freely rotating body\" problem, and the important ball-and-socket joint required to accurately model shoulder and hip joints in articulated figures. Examining several common graphics applications, we explain the benefits of our formulation of the exponential map over Euler angles and quaternions, including robustness, small state vectors, lack of explicit constraints, good modeling capabilities, simplicity of solving ordinary differential equations, and good interpolation behavior."
]
} |
1507.04831 | 1964448421 | Automatic speaker naming is the problem of localizing as well as identifying each speaking character in a TV movie live show video. This is a challenging problem mainly attributes to its multimodal nature, namely face cue alone is insufficient to achieve good performance. Previous multimodal approaches to this problem usually process the data of different modalities individually and merge them using handcrafted heuristics. Such approaches work well for simple scenes, but fail to achieve high performance for speakers with large appearance variations. In this paper, we propose a novel convolutional neural networks (CNN) based learning framework to automatically learn the fusion function of both face and audio cues. We show that without using face tracking, facial landmark localization or subtitle transcript, our system with robust multimodal feature extraction is able to achieve state-of-the-art speaker naming performance evaluated on two diverse TV series. The dataset and implementation of our algorithm are publicly available online. | Automatic SN in TV series, movies and live shows has received increasing attention in the past decade. In previous works like @cite_1 , SN was considered as an automatic face recognition problem. Recently, more researchers have tried to make use of video context to boost performance. Most of these works focused on . In @cite_8 , cast members are automatically labelled by detecting speakers and aligning subtitles transcripts to obtain identities. This approach had been adapted and further refined by @cite_16 . Bauml @cite_12 use a similar method to automatically obtain labels for those face tracks that can be detected as speaking. However, these labels are typically noisy and incomplete (i.e., usually only @math -$30 ). This is mainly due to the fact that speaker detection relies heavily on lip movement detection, which is not reliable for videos of low quality or with large face pose variation. | {
"cite_N": [
"@cite_16",
"@cite_1",
"@cite_12",
"@cite_8"
],
"mid": [
"2121027212",
"2105303679",
"2093153344",
"2168996682"
],
"abstract": [
"We investigate the problem of automatically labelling faces of characters in TV or movie material with their names, using only weak supervision from automatically-aligned subtitle and script text. Our previous work ( [8]) demonstrated promising results on the task, but the coverage of the method (proportion of video labelled) and generalization was limited by a restriction to frontal faces and nearest neighbour classification. In this paper we build on that method, extending the coverage greatly by the detection and recognition of characters in profile views. In addition, we make the following contributions: (i) seamless tracking, integration and recognition of profile and frontal detections, and (ii) a character specific multiple kernel classifier which is able to learn the features best able to discriminate between the characters. We report results on seven episodes of the TV series \"Buffy the Vampire Slayer\", demonstrating significantly increased coverage and performance with respect to previous methods on this material.",
"Naming faces is important for news videos browsing and indexing. Although some research efforts have been contributed to it, they only use the concurrent information between the face and name or employ some clues as features and use simple heuristic method or machine learning approach to finish the task. They use little extra knowledge about the names and faces. Different from previous work, in this paper we present a novel approach to name the faces by exploring extra knowledge obtained from image google. The behind assumption is that the faces of those important persons will turn out many times in the web images and could be retrieved from image google easily. Firstly, faces are detected in the video frames; and the name entities of candidate persons are extracted from the textual information by automatic speech recognition and close caption detection. Then, these candidate person names are used as queries to find the name related person images through image google. After that, the retrieved result is analyzed and some typical faces are selected through feature density estimation. Finally, the detected faces in the news video are matched with the faces selected from the result returned by image google to label each face. Experimental results on MSNBC news and CNN news demonstrate that the proposed approach is effective.",
"We address the problem of person identification in TV series. We propose a unified learning framework for multi-class classification which incorporates labeled and unlabeled data, and constraints between pairs of features in the training. We apply the framework to train multinomial logistic regression classifiers for multi-class face recognition. The method is completely automatic, as the labeled data is obtained by tagging speaking faces using subtitles and fan transcripts of the videos. We demonstrate our approach on six episodes each of two diverse TV series and achieve state-of-the-art performance.",
"We investigate the problem of automatically labelling appearances of characters in TV or film material. This is tremendously challenging due to the huge variation in imaged appearance of each character and the weakness and ambiguity of available annotation. However, we demonstrate that high precision can be achieved by combining multiple sources of information, both visual and textual. The principal novelties that we introduce are: (i) automatic generation of time stamped character annotation by aligning subtitles and transcripts; (ii) strengthening the supervisory information by identifying when characters are speaking; (iii) using complementary cues of face matching and clothing matching to propose common annotations for face tracks. Results are presented on episodes of the TV series “Buffy the Vampire Slayer”."
]
} |
1507.04831 | 1964448421 | Automatic speaker naming is the problem of localizing as well as identifying each speaking character in a TV movie live show video. This is a challenging problem mainly attributes to its multimodal nature, namely face cue alone is insufficient to achieve good performance. Previous multimodal approaches to this problem usually process the data of different modalities individually and merge them using handcrafted heuristics. Such approaches work well for simple scenes, but fail to achieve high performance for speakers with large appearance variations. In this paper, we propose a novel convolutional neural networks (CNN) based learning framework to automatically learn the fusion function of both face and audio cues. We show that without using face tracking, facial landmark localization or subtitle transcript, our system with robust multimodal feature extraction is able to achieve state-of-the-art speaker naming performance evaluated on two diverse TV series. The dataset and implementation of our algorithm are publicly available online. | In @cite_11 , each TV series episode is modeled as a Markov Random Field, which integrates face recognition, clothing appearance, speaker recognition and contextual constraints in a probabilistic manner. The identification task is then formulated as an energy minimization problem. In @cite_10 @cite_7 , person naming is resolved by a statistical learning or multiple instances learning framework. Bojanowski @cite_6 utilize scripts as weak supervision to learn a joint model of actors and actions in movies for character naming. Although these methods try to solve character naming or SN problem in new machine learning frameworks, they still heavily rely on accurate face person tracking, motion detection, landmark detection and aligned transcripts or captions. | {
"cite_N": [
"@cite_10",
"@cite_6",
"@cite_7",
"@cite_11"
],
"mid": [
"2064125229",
"2119031011",
"",
"2055251102"
],
"abstract": [
"Naming every individual person appearing in broadcast news videos with names detected from the video transcript leads to better access of the news video content. In this paper, we approach this challenging problem with a statistical learning method. Two categories of information extracted from multiple video modalities have been explored, namely features, which help distinguish the true name of every person, as well as constraints, which reveal the relationships among the names of different persons. The person-naming problem is formulated into a learning framework which predicts the most likely name for each person based on the features, and refines the predictions using the constraints. Experiments conducted on ABC World New Tonight and CNN Headline News videos demonstrate that this approach outperforms a non-learning alternative by a large amount.",
"We address the problem of learning a joint model of actors and actions in movies using weak supervision provided by scripts. Specifically, we extract actor action pairs from the script and use them as constraints in a discriminative clustering framework. The corresponding optimization problem is formulated as a quadratic program under linear constraints. People in video are represented by automatically extracted and tracked faces together with corresponding motion features. First, we apply the proposed framework to the task of learning names of characters in the movie and demonstrate significant improvements over previous methods used for this task. Second, we explore the joint actor action constraint and show its advantage for weakly supervised action learning. We validate our method in the challenging setting of localizing and recognizing characters and their actions in feature length movies Casablanca and American Beauty.",
"",
"We describe a probabilistic method for identifying characters in TV series or movies. We aim at labeling every character appearance, and not only those where a face can be detected. Consequently, our basic unit of appearance is a person track (as opposed to a face track). We model each TV series episode as a Markov Random Field, integrating face recognition, clothing appearance, speaker recognition and contextual constraints in a probabilistic manner. The identification task is then formulated as an energy minimization problem. In order to identify tracks without faces, we learn clothing models by adapting available face recognition results. Within a scene, as indicated by prior analysis of the temporal structure of the TV series, clothing features are combined by agglomerative clustering. We evaluate our approach on the first 6 episodes of The Big Bang Theory and achieve an absolute improvement of 20 for person identification and 12 for face recognition."
]
} |
1507.05106 | 1430582609 | We show how to compute any symmetric Boolean function on n variables over any field (as well as the integers) with a probabilistic polynomial of degree O(√(n log(1/ε))) and error at most ε. The degree dependence on n and ε is optimal, matching a lower bound of Razborov (1987) and Smolensky (1987) for the MAJORITY function. The proof is constructive: a low-degree polynomial can be efficiently sampled from the distribution. This polynomial construction is combined with other algebraic ideas to give the first subquadratic time algorithm for computing a (worst-case) batch of Hamming distances in superlogarithmic dimensions, exactly. To illustrate, let c(n): N -> N. Suppose we are given a database D of n vectors in {0,1}^(c(n) log n) and a collection of n query vectors Q in the same dimension. For all u in Q, we wish to compute a v in D with minimum Hamming distance from u. We solve this problem in n^(2-1/O(c(n) log^2 c(n))) randomized time. Hence, the problem is in "truly subquadratic" time for O(log n) dimensions, and in subquadratic time for d = o((log^2 n)/(log log n)^2). We apply the algorithm to computing pairs with maximum inner product, closest pair in l_1 for vectors with bounded integer entries, and pairs with maximum Jaccard coefficients. | The "planted" case of Hamming distance has been studied extensively in learning theory and cryptography. In this setting, all vectors are chosen uniformly at random, except for a planted pair of vectors with Hamming distance much smaller than the expected distance between two random vectors. Two recent references are notable: G. Valiant @cite_2 gave a breakthrough @math time algorithm, which is independent of the vector dimension and the Hamming distance of the planted pair. Valiant also gives a @math -approximation to the closest pair problem in Hamming distance running in @math time. See @cite_11 for very recent work on batch Hamming distance computations in cryptanalysis. | {
"cite_N": [
"@cite_11",
"@cite_2"
],
"mid": [
"566315627",
"2023683873"
],
"abstract": [
"We propose a new decoding algorithm for random binary linear codes. The so-called information set decoding algorithm of Prange (1962) achieves worst-case complexity 2^(0.121n). In the late 80s, Stern proposed a sort-and-match version for Prange’s algorithm, on which all variants of the currently best known decoding algorithms are built. The fastest algorithm of Becker, Joux, May and Meurer (2012) achieves running time 2^(0.102n) in the full distance decoding setting and 2^(0.0494n) with half (bounded) distance decoding.",
"Given a set of @math @math -dimensional Boolean vectors with the promise that the vectors are chosen uniformly at random with the exception of two vectors that have Pearson correlation @math (Hamming distance @math ), how quickly can one find the two correlated vectors? We present an algorithm which, for any constants @math and @math , finds the correlated pair with high probability, and runs in time @math . Given @math vectors in @math , our algorithm returns a pair of vectors whose Euclidean distance differs from that of the closest pair by a factor of at most @math and runs in time @math . The best previous algorithms (including LSH) have runtime @math . Learning Sparse Parity with Noise: Given samples from an instance of the learning parity with noise problem where each example has length @math , the true parity set has size at most @math . Learning @math -Juntas without Noise: Our results for learning sparse parities with noise imply an algorithm for learning juntas without noise with runtime n^((ω+ε)k/4) poly(n), which improves on the runtime n^(ωk/(ω+1)) poly(n) ≈ n^(0.7k) poly(n) of [13]."
]
} |
1507.05150 | 2216776969 | Personalizing image tags is a relatively new and growing area of research, and in order to advance this research community, we must review and challenge the de-facto standard of defining tag importance. We believe that for greater progress to be made, we must go beyond tags that merely describe objects that are visually represented in the image, towards more user-centric and subjective notions such as emotion, sentiment, and preferences. We focus on the notion of user preferences and show that the order that users list tags on images is correlated to the order of preference over the tags that they provided for the image. While this observation is not completely surprising, to our knowledge, we are the first to explore this aspect of user tagging behavior systematically and report empirical results to support this observation. We argue that this observation can be exploited to help advance the image tagging (and related) communities. Our contributions include: 1.) conducting a user study demonstrating this observation, 2.) collecting a dataset with user tag preferences explicitly collected. | The implicit approaches are usually more common than their explicit counterparts because one does not have to learn how to recognize or detect specific objects in the image, which, as noted earlier, is not scalable; moreover, not all concepts one would like to use in describing an image are necessarily visual (the semantic gap) @cite_8 @cite_12 . Also, with the implicit approaches one could imagine a latent space that more readily embeds some sense of relatedness @cite_19 , while on the explicit end, it is harder to extrapolate a measure of relatedness between objects of different classes. | {
"cite_N": [
"@cite_19",
"@cite_12",
"@cite_8"
],
"mid": [
"21006490",
"",
"2130660124"
],
"abstract": [
"Image annotation datasets are becoming larger and larger, with tens of millions of images and tens of thousands of possible annotations. We propose a strongly performing method that scales to such datasets by simultaneously learning to optimize precision at the top of the ranked list of annotations for a given image and learning a low-dimensional joint embedding space for both images and annotations. Our method, called WSABIE, both outperforms several baseline methods and is faster and consumes less memory.",
"",
"Presents a review of 200 references in content-based image retrieval. The paper starts with discussing the working conditions of content-based retrieval: patterns of use, types of pictures, the role of semantics, and the sensory gap. Subsequent sections discuss computational steps for image retrieval systems. Step one of the review is image processing for retrieval sorted by color, texture, and local geometry. Features for retrieval are discussed next, sorted by: accumulative and global features, salient points, object and shape features, signs, and structural combinations thereof. Similarity of pictures and objects in pictures is reviewed for each of the feature types, in close connection to the types and means of feedback the user of the systems is capable of giving by interaction. We briefly discuss aspects of system engineering: databases, system architecture, and evaluation. In the concluding section, we present our view on: the driving force of the field, the heritage from computer vision, the influence on computer vision, the role of similarity and of interaction, the need for databases, the problem of evaluation, and the role of the semantic gap."
]
} |
1507.05150 | 2216776969 | Personalizing image tags is a relatively new and growing area of research, and in order to advance this research community, we must review and challenge the de-facto standard of defining tag importance. We believe that for greater progress to be made, we must go beyond tags that merely describe objects that are visually represented in the image, towards more user-centric and subjective notions such as emotion, sentiment, and preferences. We focus on the notion of user preferences and show that the order that users list tags on images is correlated to the order of preference over the tags that they provided for the image. While this observation is not completely surprising, to our knowledge, we are the first to explore this aspect of user tagging behavior systematically and report empirical results to support this observation. We argue that this observation can be exploited to help advance the image tagging (and related) communities. Our contributions include: 1.) conducting a user study demonstrating this observation, 2.) collecting a dataset with user tag preferences explicitly collected. | With regard to personalization in image tagging, in the work by @cite_17 , they assume that tags mentioned are preferred to those unmentioned. This is similar to our assumption but they treat the tags that appear on a tagging list equally, which our work here suggests not to do. In the work by @cite_3 they also treat tags as essentially structureless entities (bag of words), and other work on personalization similarly treats user-provided tag lists as such @cite_3 @cite_7 . To our knowledge, we are the first to suggest that these user-provided lists should be treated as having structure. | {
"cite_N": [
"@cite_7",
"@cite_3",
"@cite_17"
],
"mid": [
"",
"2139420544",
"2089349245"
],
"abstract": [
"",
"Despite all of the advantages of tags as an easy and flexible information management approach, tagging is a cumbersome task. A set of descriptive tags has to be manually entered by users whenever they post a resource. This process can be simplified by the use of tag recommendation systems. Their objective is to suggest potentially useful tags to the user. We present a hybrid tag recommendation system together with a scalable, highly efficient system architecture. The system is able to utilize user feedback to tune its parameters to specific characteristics of the underlying tagging system and adapt the recommendation models to newly added content. The evaluation of the system on six real-life datasets demonstrated the system’s ability to combine tags from various sources (e.g., resource content or tags previously used by the user) to achieve the best quality of recommended tags. It also confirmed the importance of parameter tuning and content adaptation. A series of additional experiments allowed us to better understand the characteristics of the system and tagging datasets and to determine the potential areas for further system development.",
"Tagging plays an important role in many recent websites. Recommender systems can help to suggest a user the tags he might want to use for tagging a specific item. Factorization models based on the Tucker Decomposition (TD) model have been shown to provide high quality tag recommendations outperforming other approaches like PageRank, FolkRank, collaborative filtering, etc. The problem with TD models is the cubic core tensor resulting in a cubic runtime in the factorization dimension for prediction and learning. In this paper, we present the factorization model PITF (Pairwise Interaction Tensor Factorization) which is a special case of the TD model with linear runtime both for learning and prediction. PITF explicitly models the pairwise interactions between users, items and tags. The model is learned with an adaption of the Bayesian personalized ranking (BPR) criterion which originally has been introduced for item recommendation. Empirically, we show on real world datasets that this model outperforms TD largely in runtime and even can achieve better prediction quality. Besides our lab experiments, PITF has also won the ECML PKDD Discovery Challenge 2009 for graph-based tag recommendation."
]
} |
1507.05224 | 2950252067 | Which topics spark the most heated debates on social media? Identifying those topics is not only interesting from a societal point of view, but also allows the filtering and aggregation of social media content for disseminating news stories. In this paper, we perform a systematic methodological study of controversy detection by using the content and the network structure of social media. Unlike previous work, rather than study controversy in a single hand-picked topic and use domain specific knowledge, we take a general approach to study topics in any domain. Our approach to quantifying controversy is based on a graph-based three-stage pipeline, which involves (i) building a conversation graph about a topic; (ii) partitioning the conversation graph to identify potential sides of the controversy; and (iii) measuring the amount of controversy from characteristics of the graph. We perform an extensive comparison of controversy measures, different graph-building approaches, and data sources. We use both controversial and non-controversial topics on Twitter, as well as other external datasets. We find that our new random-walk-based measure outperforms existing ones in capturing the intuitive notion of controversy, and show that content features are vastly less helpful in this task. | Similarly to the papers discussed above, in our work we quantify controversy based on the graph structure of social interactions. In particular, we assume that controversial and polarized topics induce graphs with clustered structure, representing different opinions and points of view. This assumption relies on the concept of "echo chambers," which states that opinions or beliefs stay inside communities created by like-minded people, who reinforce and endorse the opinions of each other. This phenomenon has been quantified in many recent studies @cite_17 @cite_28 @cite_5 .
A different direction for quantifying controversy relies on text and sentiment analysis. Both studies focus on language found in news articles. In our case, since we are mainly working with Twitter, where text is short and noisy, and since we are aiming at quantifying controversy in a domain-agnostic manner, text analysis has its limitations. Nevertheless, we experiment with incorporating content features in our approach. | {
"cite_N": [
"@cite_28",
"@cite_5",
"@cite_17"
],
"mid": [
"2303924600",
"2056284671",
"2949098399"
],
"abstract": [
"Online publishing, social networks, and web search have dramatically lowered the costs of producing, distributing, and discovering news articles. Some scholars argue that such technological changes increase exposure to diverse perspectives, while others worry that they increase ideological segregation. We address the issue by examining web-browsing histories for 50,000 US-located users who regularly read online news. We find that social networks and search engines are associated with an increase in the mean ideological distance between individuals. However, somewhat counterintuitively, these same channels also are associated with an increase in an individual's exposure to material from his or her less preferred side of the political spectrum. Finally, the vast majority of online news consumption is accounted for by individuals simply visiting the home pages of their favorite, typically mainstream, news outlets, tempering the consequences -- both positive and negative -- of recent technological changes. We thus uncover evidence for both sides of the debate, while also finding that the magnitude of the effects is relatively modest.",
"Most people associate with people like themselves, a process called homophily. Exposure to diversity, however, makes us more informed as individuals and as a society. In this paper, we investigate political disagreements on Facebook to explore the conditions under which diverse opinions can coexist online. Via a mixed methods approach comprising 103 survey responses and 13 interviews with politically engaged American social media users, we found that participants who perceived more differences with their friends engaged less on Facebook than those who perceived more homogeneity. Weak ties were particularly brittle to political disagreements, despite being the ties most likely to offer diversity. Finally, based on our findings we suggest potential design opportunities to bridge across ideological difference: 1) support exposure to weak ties; and 2) make common ground visible while friends converse.",
"The hypothesis of selective exposure assumes that people seek out information that supports their views and eschew information that conflicts with their beliefs, and that has negative consequences on our society. Few researchers have recently found counter evidence of selective exposure in social media: users are exposed to politically diverse articles. No work has looked at what happens after exposure, particularly how individuals react to such exposure, though. Users might well be exposed to diverse articles but share only the partisan ones. To test this, we study partisan sharing on Facebook: the tendency for users to predominantly share like-minded news articles and avoid conflicting ones. We verified four main hypotheses. That is, whether partisan sharing: 1) exists at all; 2) changes across individuals (e.g., depending on their interest in politics); 3) changes over time (e.g., around elections); and 4) changes depending on perceived importance of topics. We indeed find strong evidence for partisan sharing. To test whether it has any consequence in the real world, we built a web application for BBC viewers of a popular political program, resulting in a controlled experiment involving more than 70 individuals. Based on what they share and on survey data, we find that partisan sharing has negative consequences: distorted perception of reality. However, we do also find positive aspects of partisan sharing: it is associated with people who are more knowledgeable about politics and engage more with it as they are more likely to vote in the general elections."
]
} |
1507.05224 | 2950252067 | Which topics spark the most heated debates on social media? Identifying those topics is not only interesting from a societal point of view, but also allows the filtering and aggregation of social media content for disseminating news stories. In this paper, we perform a systematic methodological study of controversy detection by using the content and the network structure of social media. Unlike previous work, rather than study controversy in a single hand-picked topic and use domain specific knowledge, we take a general approach to study topics in any domain. Our approach to quantifying controversy is based on a graph-based three-stage pipeline, which involves (i) building a conversation graph about a topic; (ii) partitioning the conversation graph to identify potential sides of the controversy; and (iii) measuring the amount of controversy from characteristics of the graph. We perform an extensive comparison of controversy measures, different graph-building approaches, and data sources. We use both controversial and non-controversial topics on Twitter, as well as other external datasets. We find that our new random-walk-based measure outperforms existing ones in capturing the intuitive notion of controversy, and show that content features are vastly less helpful in this task. | Finally, our findings on controversy have many potential applications in news-reading and public-debate scenarios. For instance, quantifying controversy can provide a basis for analyzing the "news diet" of readers @cite_15 @cite_0 , offering the chance of better information by providing recommendations of contrarian views @cite_7 , deliberating debates @cite_2 , and connecting people with opposing opinions @cite_30 @cite_22 . | {
"cite_N": [
"@cite_30",
"@cite_22",
"@cite_7",
"@cite_0",
"@cite_2",
"@cite_15"
],
"mid": [
"2032824579",
"1571597797",
"2182265315",
"1904709910",
"1514522744",
"2342213273"
],
"abstract": [
"Modern social media have increasingly helped people separate themselves by worldview. We watch television shows and follow blogs that agree with our views, and read Twitter streams of people we like. The result is often called the echo chamber. Scholars cite political echo chambers as partly to blame for the divisive and destructive U.S. political climate. In this paper, we introduce a mobile application called Political Blend designed to combat echo chambers: it brings people with different political beliefs together for a cup of coffee. Based on interviews, we discovered that people are open to this kind of application and feel it may help the broader political environment. The primary contribution of this work is evidence that people are open to meeting those different from them, even those who ideologically oppose them. In an environment dominated by applications matching based on similarities, we see that this is an important finding.",
"Social networks allow people to connect with each other and have conversations on a wide variety of topics. However, users tend to connect with like-minded people and read agreeable information, a behavior that leads to group polarization. Motivated by this scenario, we study how to take advantage of partial homophily to suggest agreeable content to users authored by people with opposite views on sensitive issues. We introduce a paradigm to present a data portrait of users, in which their characterizing topics are visualized and their corresponding tweets are displayed using an organic design. Among their tweets we inject recommended tweets from other people considering their views on sensitive issues in addition to topical relevance, indirectly motivating connections between dissimilar people. To evaluate our approach, we present a case study on Twitter about a sensitive topic in Chile, where we estimate user stances for regular people and find intermediary topics. We then evaluated our design in a user study. We found that recommending topically relevant content from authors with opposite views in a baseline interface had a negative emotional effect. We saw that our organic visualization design reverts that effect. We also observed significant individual differences linked to evaluation of recommendations. Our results suggest that organic visualization may revert the negative effects of providing potentially sensitive content.",
"The Internet gives individuals more choice in political news and information sources and more tools to filter out disagreeable information. Citing the preference described by selective exposure theory — people prefer information that supports their beliefs and avoid counter-attitudinal information — observers warn that people may use these tools to access only agreeable information and thus live in ideological echo chambers. We report on a field deployment of a browser extension that showed users feedback about the political lean of their weekly and all time reading behaviors. Compared to a control group, showing feedback led to a modest move toward balanced exposure, corresponding to 1-2 visits per week to ideologically opposing sites or 5-10 additional visits per week to centrist sites.",
"This study provides the first direct assessment of the extent to which citizens encounter news and opinion challenging their political views via mass media. The widely accepted conjecture that people refuse to hear the other side is based upon self-reports of media exposure, rather than direct observation of it. In light of this long-acknowledged limitation, I leverage unique data tracking partisanship as well as actual exposure to media collected 24 7 via passive tracking devices. Contrary to previous understandings, the vast majority of citizens consume predominately centrist information, while frequently encountering ideological programming challenging their views. In fact, the best predictor of how much conservative news you watch is how much liberal news you watch, regardless of partisanship. The demonstration of widespread exposure to diverse viewpoints challenges claims asserting that resistance to political influence occurs at the exposure stage of the persuasion process.",
"Are we the kind of creatures who are suited to govern ourselves through deliberation? We seek to answer one important component of this question: how do individuals respond to deliberation in groups with varying levels of disagreement? We use a natural experiment in which approximately 3000 individuals were divided into small groups composed of about 8-10 persons. These groups deliberated for one day about health care reform in California. We demonstrate that there is a non-monotonic effect of disagreement upon deliberative quality. Elements of deliberative quality include mutual respect, understanding, proffering of reasons and arguments, equal opportunity for discursive engagement, and neutrality. Deliberative quality is maximized at moderate levels of disagreement and lower at high levels of ideological agreement or disagreement. Furthermore, individuals exhibit higher levels of persuasion in deliberative contexts of moderate disagreement. These findings support the view that many individuals have elements of a political psychology that is well suited for deliberation. They do not recoil when encountering disagreement nor do they especially value deliberating with those who see the world in very similar ways. Instead, they regard as most successful deliberations with moderate levels of difference -- perhaps those in which they acquire new information, perspectives, or reasons. Beyond our substantive finding, this paper offers a methodological template for experimental studies of deliberation.",
"This article distinguishes nine senses of polarization and provides formal measures for each one to refine the methodology used to describe polarization in distributions of attitudes. Each distinct concept is explained through a definition, formal measures, examples, and references. We then apply these measures to GSS data regarding political views, opinions on abortion, and religiosity—topics described as revealing social polarization. Previous breakdowns of polarization include domain-specific assumptions and focus on a subset of the distribution’s features. This has conflated multiple, independent features of attitude distributions. The current work aims to extract the distinct senses of polarization and demonstrate that by becoming clearer on these distinctions we can better focus our efforts on substantive issues in social phenomena."
]
} |
1507.05348 | 2950167387 | The design of complexity-aware cascaded detectors, combining features of very different complexities, is considered. A new cascade design procedure is introduced, by formulating cascade learning as the Lagrangian optimization of a risk that accounts for both accuracy and complexity. A boosting algorithm, denoted as complexity aware cascade training (CompACT), is then derived to solve this optimization. CompACT cascades are shown to seek an optimal trade-off between accuracy and complexity by pushing features of higher complexity to the later cascade stages, where only a few difficult candidate patches remain to be classified. This enables the use of features of vastly different complexities in a single detector. In result, the feature pool can be expanded to features previously impractical for cascade design, such as the responses of a deep convolutional neural network (CNN). This is demonstrated through the design of a pedestrian detector with a pool of features whose complexities span orders of magnitude. The resulting cascade generalizes the combination of a CNN with an object proposal mechanism: rather than a pre-processing stage, CompACT cascades seamlessly integrate CNNs in their stages. This enables state of the art performance on the Caltech and KITTI datasets, at fairly fast speeds. | Detector cascades learned with boosting are commonly used for detecting template-like objects, e.g. faces @cite_39 @cite_29 @cite_42 @cite_27 , pedestrians @cite_13 @cite_14 , or cars @cite_45 . Early approaches used heuristics to find a cascade configuration with a good trade-off between classification accuracy and complexity @cite_39 @cite_29 @cite_42 @cite_27 . More recently, optimization of the accuracy-complexity trade-off has started to receive attention @cite_40 @cite_14 @cite_45 @cite_0 . @cite_44 empirically added a complexity term to the objective function of RealBoost.
@cite_40 @cite_14 @cite_45 introduced the Lagrangian formulation that we adopt, but used a single feature family throughout the cascade. Since early cascade stages must be very efficient, this implies adopting simple weak learners, e.g. decision stumps. | {
"cite_N": [
"@cite_14",
"@cite_29",
"@cite_42",
"@cite_39",
"@cite_0",
"@cite_44",
"@cite_27",
"@cite_45",
"@cite_40",
"@cite_13"
],
"mid": [
"",
"2170110077",
"",
"2137401668",
"",
"2162131940",
"2152352735",
"2101217650",
"2122352046",
"2125556102"
],
"abstract": [
"",
"We describe a method for training object detectors using a generalization of the cascade architecture, which results in a detection rate and speed comparable to that of the best published detectors while allowing for easier training and a detector with fewer features. In addition, the method allows for quickly calibrating the detector for a target detection rate, false positive rate or speed. One important advantage of our method is that it enables systematic exploration of the ROC surface, which characterizes the trade-off between accuracy and speed for a given classifier.",
"",
"This paper describes a face detection framework that is capable of processing images extremely rapidly while achieving high detection rates. There are three key contributions. The first is the introduction of a new image representation called the “Integral Image” which allows the features used by our detector to be computed very quickly. The second is a simple and efficient classifier which is built using the AdaBoost learning algorithm (Freund and Schapire, 1995) to select a small number of critical visual features from a very large set of potential features. The third contribution is a method for combining classifiers in a “cascade” which allows background regions of the image to be quickly discarded while spending more computation on promising face-like regions. A set of experiments in the domain of face detection is presented. The system yields face detection performance comparable to the best previous systems (Sung and Poggio, 1998; Rowley et al., 1998; Schneiderman and Kanade, 2000; Roth et al., 2000). Implemented on a conventional desktop, face detection proceeds at 15 frames per second.",
"",
"This paper presents a fast method for detecting multi-view cars in real-world scenes. Cars are artificial objects with various appearance changes, but they have relatively consistent characteristics in structure that consist of some basic local elements. Inspired by this, we propose a novel set of image strip features to describe the appearances of those elements. The new features represent various types of lines and arcs with edge-like and ridge-like strip patterns, which significantly enrich the simple features such as haar-like features and edgelet features. They can also be calculated efficiently using the integral image. Moreover, we develop a new complexity-aware criterion for RealBoost algorithm to balance the discriminative capability and efficiency of the selected features. The experimental results on widely used single view and multi-view car datasets show that our approach is fast and has good performance.",
"In this paper, we propose a novel method, called \"dynamic cascade\", for training an efficient face detector on massive data sets. There are three key contributions. The first is a new cascade algorithm called \"dynamic cascade \", which can train cascade classifiers on massive data sets and only requires a small number of training parameters. The second is the introduction of a new kind of weak classifier, called \"Bayesian stump\", for training boost classifiers. It produces more stable boost classifiers with fewer features. Moreover, we propose a strategy for using our dynamic cascade algorithm with multiple sets of features to further improve the detection performance without significant increase in the detector's computational cost. Experimental results show that all the new techniques effectively improve the detection performance. Finally, we provide the first large standard data set for face detection, so that future researches on the topic can be compared on the same training and testing set.",
"The problem of learning classifier cascades is considered. A new cascade boosting algorithm, fast cascade boosting (FCBoost), is proposed. FCBoost is shown to have a number of interesting properties, namely that it 1) minimizes a Lagrangian risk that jointly accounts for classification accuracy and speed, 2) generalizes adaboost, 3) can be made cost-sensitive to support the design of high detection rate cascades, and 4) is compatible with many predictor structures suitable for sequential decision making. It is shown that a rich family of such structures can be derived recursively from cascade predictors of two stages, denoted cascade generators. Generators are then proposed for two new cascade families, last-stage and multiplicative cascades, that generalize the two most popular cascade architectures in the literature. The concept of neutral predictors is finally introduced, enabling FCBoost to automatically determine the cascade configuration, i.e., number of stages and number of weak learners per stage, for the learned cascades. Experiments on face and pedestrian detection show that the resulting cascades outperform current state-of-the-art methods in both detection accuracy and speed.",
"A new strategy is proposed for the design of cascaded object detectors of high detection-rate. The problem of jointly minimizing the false-positive rate and classification complexity of a cascade, given a constraint on its detection rate, is considered. It is shown that it reduces to the problem of minimizing false-positive rate given detection-rate and is, therefore, an instance of the classic problem of cost-sensitive learning. A cost-sensitive extension of boosting, denoted by asymmetric boosting, is introduced. It maintains a high detection-rate across the boosting iterations, and allows the design of cascaded detectors of high overall detection-rate. Experimental evaluation shows that, when compared to previous cascade design algorithms, the cascades produced by asymmetric boosting achieve significantly higher detection-rates, at the cost of a marginal increase in computation.",
"Multi-resolution image features may be approximated via extrapolation from nearby scales, rather than being computed explicitly. This fundamental insight allows us to design object detection algorithms that are as accurate, and considerably faster, than the state-of-the-art. The computational bottleneck of many modern detectors is the computation of features at every scale of a finely-sampled image pyramid. Our key insight is that one may compute finely sampled feature pyramids at a fraction of the cost, without sacrificing performance: for a broad family of features we find that features computed at octave-spaced scale intervals are sufficient to approximate features on a finely-sampled pyramid. Extrapolation is inexpensive as compared to direct feature computation. As a result, our approximation yields considerable speedups with negligible loss in detection accuracy. We modify three diverse visual recognition systems to use fast feature pyramids and show results on both pedestrian detection (measured on the Caltech, INRIA, TUD-Brussels and ETH data sets) and general object detection (measured on the PASCAL VOC). The approach is general and is widely applicable to vision algorithms requiring fine-grained multi-scale analysis. Our approximation is valid for images with broad spectra (most natural images) and fails for images with narrow band-pass spectra (e.g., periodic textures)."
]
} |
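The Lagrangian accuracy-complexity trade-off described in this row's related-work text (and in the FCBoost abstract above) can be illustrated with a minimal sketch: at each boosting round, pick the weak learner minimizing classification loss plus a multiplier times its evaluation cost. This is not the authors' CompACT code; the feature names, losses, and costs below are invented for illustration.

```python
# Hedged sketch of complexity-aware weak-learner selection: minimize a
# Lagrangian L = loss + eta * cost over a pool of candidate features.
# All candidates and numbers are hypothetical.

def select_weak_learner(candidates, eta):
    """candidates: list of (name, exp_loss, eval_cost) tuples.
    eta: Lagrange multiplier trading accuracy against complexity."""
    name, _, _ = min(candidates, key=lambda c: c[1] + eta * c[2])
    return name

# A cheap stump wins when complexity is penalized heavily; an expensive
# CNN-like feature is selected only when eta is (near) zero.
pool = [("stump", 0.40, 1.0), ("hog_tree", 0.30, 10.0), ("cnn", 0.05, 1000.0)]
print(select_weak_learner(pool, eta=0.05))  # -> stump
print(select_weak_learner(pool, eta=0.01))  # -> hog_tree
print(select_weak_learner(pool, eta=0.0))   # -> cnn
```

Since later cascade stages process far fewer surviving patches, the effective per-patch cost of a feature shrinks there, which is consistent with the abstract's claim that expensive features get pushed to the tail of the cascade.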
1507.05348 | 2950167387 | The design of complexity-aware cascaded detectors, combining features of very different complexities, is considered. A new cascade design procedure is introduced, by formulating cascade learning as the Lagrangian optimization of a risk that accounts for both accuracy and complexity. A boosting algorithm, denoted as complexity aware cascade training (CompACT), is then derived to solve this optimization. CompACT cascades are shown to seek an optimal trade-off between accuracy and complexity by pushing features of higher complexity to the later cascade stages, where only a few difficult candidate patches remain to be classified. This enables the use of features of vastly different complexities in a single detector. In result, the feature pool can be expanded to features previously impractical for cascade design, such as the responses of a deep convolutional neural network (CNN). This is demonstrated through the design of a pedestrian detector with a pool of features whose complexities span orders of magnitude. The resulting cascade generalizes the combination of a CNN with an object proposal mechanism: rather than a pre-processing stage, CompACT cascades seamlessly integrate CNNs in their stages. This enables state of the art performance on the Caltech and KITTI datasets, at fairly fast speeds. | This has motivated extensive work on the design of efficient features. For pedestrian detection, the integral channel features of @cite_25 have recently become popular. They extend the Haar-like features of @cite_39 into a set of color and histogram-of-gradients (HOG) channels. More recently, a computationally efficient version of @cite_25 , denoted the aggregate channel features (ACF), has been introduced in @cite_13 . @cite_43 complemented ACF with local binary patterns (LBP) and covariance features, for better detection accuracy. | {
"cite_N": [
"@cite_43",
"@cite_13",
"@cite_25",
"@cite_39"
],
"mid": [
"2098064689",
"2125556102",
"2159386181",
"2137401668"
],
"abstract": [
"We propose a simple yet effective approach to the problem of pedestrian detection which outperforms the current state-of-the-art. Our new features are built on the basis of low-level visual features and spatial pooling. Incorporating spatial pooling improves the translational invariance and thus the robustness of the detection process. We then directly optimise the partial area under the ROC curve (pAUC) measure, which concentrates detection performance in the range of most practical importance. The combination of these factors leads to a pedestrian detector which outperforms all competitors on all of the standard benchmark datasets. We advance state-of-the-art results by lowering the average miss rate from 13% to 11% on the INRIA benchmark, 41% to 37% on the ETH benchmark, 51% to 42% on the TUD-Brussels benchmark and 36% to 29% on the Caltech-USA benchmark.",
"Multi-resolution image features may be approximated via extrapolation from nearby scales, rather than being computed explicitly. This fundamental insight allows us to design object detection algorithms that are as accurate, and considerably faster, than the state-of-the-art. The computational bottleneck of many modern detectors is the computation of features at every scale of a finely-sampled image pyramid. Our key insight is that one may compute finely sampled feature pyramids at a fraction of the cost, without sacrificing performance: for a broad family of features we find that features computed at octave-spaced scale intervals are sufficient to approximate features on a finely-sampled pyramid. Extrapolation is inexpensive as compared to direct feature computation. As a result, our approximation yields considerable speedups with negligible loss in detection accuracy. We modify three diverse visual recognition systems to use fast feature pyramids and show results on both pedestrian detection (measured on the Caltech, INRIA, TUD-Brussels and ETH data sets) and general object detection (measured on the PASCAL VOC). The approach is general and is widely applicable to vision algorithms requiring fine-grained multi-scale analysis. Our approximation is valid for images with broad spectra (most natural images) and fails for images with narrow band-pass spectra (e.g., periodic textures).",
"We study the performance of ‘integral channel features’ for image classification tasks, focusing in particular on pedestrian detection. The general idea behind integral channel features is that multiple registered image channels are computed using linear and non-linear transformations of the input image, and then features such as local sums, histograms, and Haar features and their various generalizations are efficiently computed using integral images. Such features have been used in recent literature for a variety of tasks – indeed, variations appear to have been invented independently multiple times. Although integral channel features have proven effective, little effort has been devoted to analyzing or optimizing the features themselves. In this work we present a unified view of the relevant work in this area and perform a detailed experimental evaluation. We demonstrate that when designed properly, integral channel features not only outperform other features including histogram of oriented gradient (HOG), they also (1) naturally integrate heterogeneous sources of information, (2) have few parameters and are insensitive to exact parameter settings, (3) allow for more accurate spatial localization during detection, and (4) result in fast detectors when coupled with cascade classifiers.",
"This paper describes a face detection framework that is capable of processing images extremely rapidly while achieving high detection rates. There are three key contributions. The first is the introduction of a new image representation called the “Integral Image” which allows the features used by our detector to be computed very quickly. The second is a simple and efficient classifier which is built using the AdaBoost learning algorithm (Freund and Schapire, 1995) to select a small number of critical visual features from a very large set of potential features. The third contribution is a method for combining classifiers in a “cascade” which allows background regions of the image to be quickly discarded while spending more computation on promising face-like regions. A set of experiments in the domain of face detection is presented. The system yields face detection performance comparable to the best previous systems (Sung and Poggio, 1998; Rowley et al., 1998; Schneiderman and Kanade, 2000; Roth et al., 2000). Implemented on a conventional desktop, face detection proceeds at 15 frames per second."
]
} |
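Several of the abstracts above (Viola-Jones, integral channel features, ACF) rest on the integral-image trick: precompute cumulative sums so that any rectangular channel sum costs four lookups, independent of the rectangle's area. A small illustrative sketch with toy data, not taken from any of the cited implementations:

```python
# Integral image over one feature channel; box_sum is O(1) per rectangle.

def integral_image(ch):
    h, w = len(ch), len(ch[0])
    ii = [[0] * (w + 1) for _ in range(h + 1)]
    for y in range(h):
        for x in range(w):
            ii[y + 1][x + 1] = ch[y][x] + ii[y][x + 1] + ii[y + 1][x] - ii[y][x]
    return ii

def box_sum(ii, y0, x0, y1, x1):
    """Sum of ch[y0:y1][x0:x1] via four corner lookups."""
    return ii[y1][x1] - ii[y0][x1] - ii[y1][x0] + ii[y0][x0]

ch = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
ii = integral_image(ch)
print(box_sum(ii, 0, 0, 3, 3))  # 45, the full-image sum
print(box_sum(ii, 1, 1, 3, 3))  # 28 = 5 + 6 + 8 + 9
```

Haar-like features are then differences of such box sums, which is what makes exhaustive sliding-window evaluation affordable.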
1507.05348 | 2950167387 | The design of complexity-aware cascaded detectors, combining features of very different complexities, is considered. A new cascade design procedure is introduced, by formulating cascade learning as the Lagrangian optimization of a risk that accounts for both accuracy and complexity. A boosting algorithm, denoted as complexity aware cascade training (CompACT), is then derived to solve this optimization. CompACT cascades are shown to seek an optimal trade-off between accuracy and complexity by pushing features of higher complexity to the later cascade stages, where only a few difficult candidate patches remain to be classified. This enables the use of features of vastly different complexities in a single detector. In result, the feature pool can be expanded to features previously impractical for cascade design, such as the responses of a deep convolutional neural network (CNN). This is demonstrated through the design of a pedestrian detector with a pool of features whose complexities span orders of magnitude. The resulting cascade generalizes the combination of a CNN with an object proposal mechanism: rather than a pre-processing stage, CompACT cascades seamlessly integrate CNNs in their stages. This enables state of the art performance on the Caltech and KITTI datasets, at fairly fast speeds. | Several works proposed alternative feature channels, obtained by convolving different filters with the original HOG+LUV channels @cite_18 @cite_9 @cite_33 @cite_10 . The SquaresChnFtrs of @cite_47 reduce the large number of features of @cite_25 @cite_39 to 16 box-like filters of various sizes. @cite_10 extended the locally decorrelated features of @cite_32 to ACF, learning four 5 @math 5 PCA-like filters from each of the ACF channels. Instead of empirical filter design, @cite_18 exploited prior knowledge about pedestrian shape to design informed filters. They later found, however, that such filters are actually not needed @cite_9 . 
Instead, the number of filters appears to be the most important variable: features as simple as checkerboard-like patterns, or purely random filters, can achieve very good performance, as long as there are enough of them. Although such detectors have reached state-of-the-art performance @cite_43 @cite_21 , they are relatively slow, due to the convolution computations with several hundred filters. | {
"cite_N": [
"@cite_18",
"@cite_33",
"@cite_9",
"@cite_21",
"@cite_32",
"@cite_39",
"@cite_43",
"@cite_47",
"@cite_10",
"@cite_25"
],
"mid": [
"2081021369",
"",
"2950561226",
"",
"",
"2137401668",
"2098064689",
"2034779469",
"2170101770",
"2159386181"
],
"abstract": [
"We propose a simple yet effective detector for pedestrian detection. The basic idea is to incorporate common sense and everyday knowledge into the design of simple and computationally efficient features. As pedestrians usually appear up-right in image or video data, the problem of pedestrian detection is considerably simpler than general purpose people detection. We therefore employ a statistical model of the up-right human body where the head, the upper body, and the lower body are treated as three distinct components. Our main contribution is to systematically design a pool of rectangular templates that are tailored to this shape model. As we incorporate different kinds of low-level measurements, the resulting multi-modal & multi-channel Haar-like features represent characteristic differences between parts of the human body yet are robust against variations in clothing or environmental settings. Our approach avoids exhaustive searches over all possible configurations of rectangle features and neither relies on random sampling. It thus marks a middle ground among recently published techniques and yields efficient low-dimensional yet highly discriminative features. Experimental results on the INRIA and Caltech pedestrian datasets show that our detector reaches state-of-the-art performance at low computational costs and that our features are robust against occlusions.",
"",
"This paper starts from the observation that multiple top performing pedestrian detectors can be modelled by using an intermediate layer filtering low-level features in combination with a boosted decision forest. Based on this observation we propose a unifying framework and experimentally explore different filter families. We report extensive results enabling a systematic analysis. Using filtered channel features we obtain top performance on the challenging Caltech and KITTI datasets, while using only HOG+LUV as low-level features. When adding optical flow features we further improve detection quality and report the best known results on the Caltech dataset, reaching 93% recall at 1 FPPI.",
"",
"",
"This paper describes a face detection framework that is capable of processing images extremely rapidly while achieving high detection rates. There are three key contributions. The first is the introduction of a new image representation called the “Integral Image” which allows the features used by our detector to be computed very quickly. The second is a simple and efficient classifier which is built using the AdaBoost learning algorithm (Freund and Schapire, 1995) to select a small number of critical visual features from a very large set of potential features. The third contribution is a method for combining classifiers in a “cascade” which allows background regions of the image to be quickly discarded while spending more computation on promising face-like regions. A set of experiments in the domain of face detection is presented. The system yields face detection performance comparable to the best previous systems (Sung and Poggio, 1998; Rowley et al., 1998; Schneiderman and Kanade, 2000; Roth et al., 2000). Implemented on a conventional desktop, face detection proceeds at 15 frames per second.",
"We propose a simple yet effective approach to the problem of pedestrian detection which outperforms the current state-of-the-art. Our new features are built on the basis of low-level visual features and spatial pooling. Incorporating spatial pooling improves the translational invariance and thus the robustness of the detection process. We then directly optimise the partial area under the ROC curve (pAUC) measure, which concentrates detection performance in the range of most practical importance. The combination of these factors leads to a pedestrian detector which outperforms all competitors on all of the standard benchmark datasets. We advance state-of-the-art results by lowering the average miss rate from 13% to 11% on the INRIA benchmark, 41% to 37% on the ETH benchmark, 51% to 42% on the TUD-Brussels benchmark and 36% to 29% on the Caltech-USA benchmark.",
"The current state of the art solutions for object detection describe each class by a set of models trained on discovered sub-classes (so called \"components\"), with each model itself composed of collections of interrelated parts (deformable models). These detectors build upon the now classic Histogram of Oriented Gradients+linear SVM combo. In this paper we revisit some of the core assumptions in HOG+SVM and show that by properly designing the feature pooling, feature selection, preprocessing, and training methods, it is possible to reach top quality, at least for pedestrian detections, using a single rigid component. We provide experiments for a large design space, that give insights into the design of classifiers, as well as relevant information for practitioners. Our best detector is fully feed-forward, has a single unified architecture, uses only histograms of oriented gradients and colour information in monocular static images, and improves over 23 other methods on the INRIA, ETH and Caltech-USA datasets, reducing the average miss-rate over HOG+SVM by more than 30%.",
"Even with the advent of more sophisticated, data-hungry methods, boosted decision trees remain extraordinarily successful for fast rigid object detection, achieving top accuracy on numerous datasets. While effective, most boosted detectors use decision trees with orthogonal (single feature) splits, and the topology of the resulting decision boundary may not be well matched to the natural topology of the data. Given highly correlated data, decision trees with oblique (multiple feature) splits can be effective. Use of oblique splits, however, comes at considerable computational expense. Inspired by recent work on discriminative decorrelation of HOG features, we instead propose an efficient feature transform that removes correlations in local neighborhoods. The result is an overcomplete but locally decorrelated representation ideally suited for use with orthogonal decision trees. In fact, orthogonal trees with our locally decorrelated features outperform oblique trees trained over the original features at a fraction of the computational cost. The overall improvement in accuracy is dramatic: on the Caltech Pedestrian Dataset, we reduce false positives nearly tenfold over the previous state-of-the-art.",
"We study the performance of ‘integral channel features’ for image classification tasks, focusing in particular on pedestrian detection. The general idea behind integral channel features is that multiple registered image channels are computed using linear and non-linear transformations of the input image, and then features such as local sums, histograms, and Haar features and their various generalizations are efficiently computed using integral images. Such features have been used in recent literature for a variety of tasks – indeed, variations appear to have been invented independently multiple times. Although integral channel features have proven effective, little effort has been devoted to analyzing or optimizing the features themselves. In this work we present a unified view of the relevant work in this area and perform a detailed experimental evaluation. We demonstrate that when designed properly, integral channel features not only outperform other features including histogram of oriented gradient (HOG), they also (1) naturally integrate heterogeneous sources of information, (2) have few parameters and are insensitive to exact parameter settings, (3) allow for more accurate spatial localization during detection, and (4) result in fast detectors when coupled with cascade classifiers."
]
} |
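The filtered-channel idea discussed in this row (convolving the HOG+LUV channels with banks of simple filters such as checkerboards or random patterns) reduces to small 2-D convolutions per channel. An illustrative sketch with a toy channel and a single hypothetical 2x2 checkerboard filter; real detectors apply hundreds of such filters, which is the cost the surrounding text refers to:

```python
# "Valid" 2-D convolution (really cross-correlation, as in filtered
# channel features) of one toy channel with one small filter.

def convolve_valid(ch, k):
    h, w = len(ch), len(ch[0])
    kh, kw = len(k), len(k[0])
    out = []
    for y in range(h - kh + 1):
        row = []
        for x in range(w - kw + 1):
            row.append(sum(k[i][j] * ch[y + i][x + j]
                           for i in range(kh) for j in range(kw)))
        out.append(row)
    return out

checkerboard = [[1, -1], [-1, 1]]   # one member of a larger filter bank
ch = [[0, 0, 1], [0, 1, 1], [1, 1, 1]]
print(convolve_valid(ch, checkerboard))  # [[1, -1], [-1, 0]]
```

The boosted decision forest then picks its split features from the pooled responses of all filters over all channels.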
1507.05348 | 2950167387 | The design of complexity-aware cascaded detectors, combining features of very different complexities, is considered. A new cascade design procedure is introduced, by formulating cascade learning as the Lagrangian optimization of a risk that accounts for both accuracy and complexity. A boosting algorithm, denoted as complexity aware cascade training (CompACT), is then derived to solve this optimization. CompACT cascades are shown to seek an optimal trade-off between accuracy and complexity by pushing features of higher complexity to the later cascade stages, where only a few difficult candidate patches remain to be classified. This enables the use of features of vastly different complexities in a single detector. In result, the feature pool can be expanded to features previously impractical for cascade design, such as the responses of a deep convolutional neural network (CNN). This is demonstrated through the design of a pedestrian detector with a pool of features whose complexities span orders of magnitude. The resulting cascade generalizes the combination of a CNN with an object proposal mechanism: rather than a pre-processing stage, CompACT cascades seamlessly integrate CNNs in their stages. This enables state of the art performance on the Caltech and KITTI datasets, at fairly fast speeds. | While deep convolutional neural network (CNN) classifiers have achieved impressive results for general object detection @cite_38 @cite_23 , e.g. on VOC2007 or ImageNet, they have not excelled on pedestrian detection @cite_7 @cite_4 . Benchmarks like Caltech @cite_15 are still dominated by classical handcrafted features (see e.g. a recent comprehensive evaluation of pedestrian detectors by @cite_26 ). Recently, @cite_19 transferred the R-CNN framework to the pedestrian detection task, showing some improvement over previous deep learning detectors @cite_7 @cite_4 . However, the gap to the state of the art is still significant.
Deep models also tend to be too heavy for sliding window detection. This is usually addressed with object proposal mechanisms @cite_38 @cite_20 @cite_6 that pre-select the most promising image patches. This two-stage decomposition (proposal generation and classification) is a simple cascade mechanism. In this work, we consider the seamless combination of these two stages into a cascade explicitly designed to account for both accuracy and complexity, so as to achieve detectors that are both highly accurate and fast. | {
"cite_N": [
"@cite_38",
"@cite_26",
"@cite_4",
"@cite_7",
"@cite_6",
"@cite_19",
"@cite_23",
"@cite_15",
"@cite_20"
],
"mid": [
"2102605133",
"1650122911",
"2156547346",
"2949966521",
"",
"2949493420",
"2179352600",
"2031454541",
""
],
"abstract": [
"Object detection performance, as measured on the canonical PASCAL VOC dataset, has plateaued in the last few years. The best-performing methods are complex ensemble systems that typically combine multiple low-level image features with high-level context. In this paper, we propose a simple and scalable detection algorithm that improves mean average precision (mAP) by more than 30% relative to the previous best result on VOC 2012 -- achieving a mAP of 53.3%. Our approach combines two key insights: (1) one can apply high-capacity convolutional neural networks (CNNs) to bottom-up region proposals in order to localize and segment objects and (2) when labeled training data is scarce, supervised pre-training for an auxiliary task, followed by domain-specific fine-tuning, yields a significant performance boost. Since we combine region proposals with CNNs, we call our method R-CNN: Regions with CNN features. We also present experiments that provide insight into what the network learns, revealing a rich hierarchy of image features. Source code for the complete system is available at http://www.cs.berkeley.edu/~rbg/rcnn.",
"Paper-by-paper results make it easy to miss the forest for the trees. We analyse the remarkable progress of the last decade by discussing the main ideas explored in the 40+ detectors currently present in the Caltech pedestrian detection benchmark. We observe that there exist three families of approaches, all currently reaching similar detection quality. Based on our analysis, we study the complementarity of the most promising ideas by combining multiple published strategies. This new decision forest detector achieves the current best known performance on the challenging Caltech-USA dataset.",
"Feature extraction, deformation handling, occlusion handling, and classification are four important components in pedestrian detection. Existing methods learn or design these components either individually or sequentially. The interaction among these components is not yet well explored. This paper proposes that they should be jointly learned in order to maximize their strengths through cooperation. We formulate these four components into a joint deep learning framework and propose a new deep network architecture. By establishing automatic, mutual interaction among components, the deep model achieves a 9% reduction in the average miss rate compared with the current best-performing pedestrian detection approaches on the largest Caltech benchmark dataset.",
"Pedestrian detection is a problem of considerable practical interest. Adding to the list of successful applications of deep learning methods to vision, we report state-of-the-art and competitive results on all major pedestrian datasets with a convolutional network model. The model uses a few new twists, such as multi-stage features, connections that skip layers to integrate global shape information with local distinctive motif information, and an unsupervised method based on convolutional sparse coding to pre-train the filters at each stage.",
"",
"In this paper we study the use of convolutional neural networks (convnets) for the task of pedestrian detection. Despite their recent diverse successes, convnets historically underperform compared to other pedestrian detectors. We deliberately omit explicitly modelling the problem into the network (e.g. parts or occlusion modelling) and show that we can reach competitive performance without bells and whistles. In a wide range of experiments we analyse small and big convnets, their architectural choices, parameters, and the influence of different training data, including pre-training on surrogate tasks. We present the best convnet detectors on the Caltech and KITTI dataset. On Caltech our convnets reach top performance both for the Caltech1x and Caltech10x training setup. Using additional data at training time our strongest convnet model is competitive even to detectors that use additional data (optical flow) at test time.",
"Existing deep convolutional neural networks (CNNs) require a fixed-size (e.g. 224×224) input image. This requirement is “artificial” and may hurt the recognition accuracy for the images or sub-images of an arbitrary size/scale. In this work, we equip the networks with a more principled pooling strategy, “spatial pyramid pooling”, to eliminate the above requirement. The new network structure, called SPP-net, can generate a fixed-length representation regardless of image size/scale. By removing the fixed-size limitation, we can improve all CNN-based image classification methods in general. Our SPP-net achieves state-of-the-art accuracy on the datasets of ImageNet 2012, Pascal VOC 2007, and Caltech101.",
"Pedestrian detection is a key problem in computer vision, with several applications that have the potential to positively impact quality of life. In recent years, the number of approaches to detecting pedestrians in monocular images has grown steadily. However, multiple data sets and widely varying evaluation protocols are used, making direct comparisons difficult. To address these shortcomings, we perform an extensive evaluation of the state of the art in a unified framework. We make three primary contributions: 1) We put together a large, well-annotated, and realistic monocular pedestrian detection data set and study the statistics of the size, position, and occlusion patterns of pedestrians in urban scenes, 2) we propose a refined per-frame evaluation methodology that allows us to carry out probing and informative comparisons, including measuring performance in relation to scale and occlusion, and 3) we evaluate the performance of sixteen pretrained state-of-the-art detectors across six data sets. Our study allows us to assess the state of the art and provides a framework for gauging future efforts. Our experiments show that despite significant progress, performance still has much room for improvement. In particular, detection is disappointing at low resolutions and for partially occluded pedestrians.",
""
]
} |
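The two-stage proposal-plus-classification pipeline discussed in the related-work passage above is, as the text notes, a simple cascade. Below is a minimal sketch of early-rejection cascading; the stage costs, score functions, and thresholds are all made up for the example, and this is a toy illustration, not the CompACT boosting procedure itself.

```python
def cascade_classify(x, stages):
    """Run candidate x through cascade stages ordered cheap -> expensive.

    Each stage is (cost, score_fn, reject_threshold). Returns
    (accepted, total_cost): rejected candidates exit early, so the
    expensive later stages are only evaluated on the few survivors.
    """
    score, cost = 0.0, 0.0
    for stage_cost, score_fn, thresh in stages:
        cost += stage_cost
        score += score_fn(x)
        if score < thresh:
            return False, cost  # early rejection: costly stages are skipped
    return True, cost

# toy stages: a cheap weak test first, an "expensive" one last
stages = [
    (1.0, lambda x: x[0], 0.0),   # cheap, e.g. a channel-feature stump
    (50.0, lambda x: x[1], 1.0),  # expensive, e.g. a CNN score
]
print(cascade_classify((-1.0, 5.0), stages))  # (False, 1.0) - rejected cheaply
print(cascade_classify((1.0, 5.0), stages))   # (True, 51.0)
```

Ordering cheap stages first means most candidate patches pay only the cheap cost, and only survivors reach the expensive CNN-like stage; that accuracy/complexity trade-off is what the CompACT Lagrangian formulation optimizes explicitly.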
1507.04739 | 2144865572 | We give a sharp lower bound on the number of matchings of a given size in a bipartite graph. When specialized to regular bipartite graphs, our results imply Friedland's Lower Matching Conjecture and Schrijver's theorem proven by Gurvits and Csikvari. Indeed, our work extends the recent work of Csikvari done for regular and bi-regular bipartite graphs. Moreover, our lower bounds are order optimal as they are attained for a sequence of @math -lifts of the original graph as well as for random @math -lifts of the original graph when @math tends to infinity. We then extend our results to permanents and subpermanents sums. For permanents, we are able to recover the lower bound of Schrijver recently proved by Gurvits using stable polynomials. Our proof is algorithmic and borrows ideas from the theory of local weak convergence of graphs, statistical physics and covers of graphs. We provide new lower bounds for subpermanents sums and obtain new results on the number of matching in random @math -lifts with some implications for the matching measure and the spectral measure of random @math -lifts as well as for the spectral measure of infinite trees. | Recall that @math is called the partition function. A crucial observation made by Csikvári @cite_48 is that for any @math -lift @math of a bipartite graph @math , we have @math , see Proposition in the sequel. Note that a @math -lift of a @math -regular graph is still @math -regular. Starting from any @math -regular bipartite graph @math , Csikvári builds a sequence of @math -lifts with increasing girth. Then Csikvári uses the framework of local weak convergence in order to define a limiting graph for the sequence of @math -lifts. In this particular case, the limit is the infinite @math -regular tree.
Using the connection between the local weak convergence and the matching measure of a graph developed in @cite_31 , Csikvári computes a limiting partition function associated to this infinite @math -regular tree which is obtained from the Kesten-McKay measure. This limiting partition function is then a lower bound for the original partition function @math and the lower bounds ) and ) are then easily obtained by properly choosing the parameter @math as a function of @math the size of the matchings that we need to count. | {
"cite_N": [
"@cite_48",
"@cite_31"
],
"mid": [
"2128943288",
"1574882396"
],
"abstract": [
"Friedland's Lower Matching Conjecture asserts that if @math is a @math --regular bipartite graph on @math vertices, and @math denotes the number of matchings of size @math , then @math where @math . When @math , this conjecture reduces to a theorem of Schrijver which says that a @math --regular bipartite graph on @math vertices has at least @math perfect matchings. L. Gurvits proved an asymptotic version of the Lower Matching Conjecture, namely he proved that @math In this paper, we prove the Lower Matching Conjecture. In fact, we will prove a slightly stronger statement which gives an extra @math factor compared to the conjecture if @math is separated away from @math and @math , and is tight up to a constant factor if @math is separated away from @math . We will also give a new proof of Gurvits's and Schrijver's theorems, and we extend these theorems to @math --biregular bipartite graphs.",
"We define the matching measure of a lattice L as the spectral measure of the tree of self-avoiding walks in L. We connect this invariant to the monomer–dimer partition function of a sequence of finite graphs converging to L. This allows us to express the monomer–dimer free energy of L in terms of the matching measure. Exploiting an analytic advantage of the matching measure over the Mayer series then leads to new, rigorous bounds on the monomer–dimer free energies of various Euclidean lattices. While our estimates use only the computational data given in previous papers, they improve the known bounds significantly."
]
} |
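The lift construction in the record above can be tried concretely: the 8-cycle C8 is a 2-lift (cyclic double cover) of the 4-cycle C4, and the matching counts m_k of base and lift can be enumerated by brute force. The exact inequality relating the partition functions of a graph and its lifts is masked as @math in the text, so the sketch below only computes the two coefficient sequences for comparison; the graphs are illustrative choices, not from the paper.

```python
from itertools import combinations

def matching_counts(edges):
    """m_k = number of matchings with exactly k edges, by brute force."""
    counts = [1]  # m_0: the empty matching
    k = 1
    while True:
        c = sum(
            1 for sub in combinations(edges, k)
            if len({v for e in sub for v in e}) == 2 * k  # pairwise disjoint edges
        )
        if c == 0:
            break
        counts.append(c)
        k += 1
    return counts

def cycle(n):
    return [(i, (i + 1) % n) for i in range(n)]

# C8 is the cyclic double cover (a 2-lift) of C4
print(matching_counts(cycle(4)))  # [1, 4, 2]
print(matching_counts(cycle(8)))  # [1, 8, 20, 16, 2]
```

Here the match generating polynomials are P_{C4}(λ) = 1 + 4λ + 2λ² and P_{C8}(λ) = 1 + 8λ + 20λ² + 16λ³ + 2λ⁴, so any claimed relation between a graph and its 2-lift can be checked coefficient by coefficient on this toy pair.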
1507.04739 | 2144865572 | We give a sharp lower bound on the number of matchings of a given size in a bipartite graph. When specialized to regular bipartite graphs, our results imply Friedland's Lower Matching Conjecture and Schrijver's theorem proven by Gurvits and Csikvari. Indeed, our work extends the recent work of Csikvari done for regular and bi-regular bipartite graphs. Moreover, our lower bounds are order optimal as they are attained for a sequence of @math -lifts of the original graph as well as for random @math -lifts of the original graph when @math tends to infinity. We then extend our results to permanents and subpermanents sums. For permanents, we are able to recover the lower bound of Schrijver recently proved by Gurvits using stable polynomials. Our proof is algorithmic and borrows ideas from the theory of local weak convergence of graphs, statistical physics and covers of graphs. We provide new lower bounds for subpermanents sums and obtain new results on the number of matching in random @math -lifts with some implications for the matching measure and the spectral measure of random @math -lifts as well as for the spectral measure of infinite trees. | As explained in the sequel of this section, all the basic ideas used in our proofs were present in one form or another in the literature: using lifts for extremal graph theory was one of the main motivations for their introduction in a series of papers by Amit, Linial, Matousek, Rozenman and Bilu @cite_45 @cite_5 @cite_14 @cite_35 ; the matching measure and the local recursions already appeared in the seminal work of Heilmann and Lieb @cite_13 ; the function @math defined in ) is known in statistical physics as the Bethe entropy @cite_36 . The main contribution of this paper is a conceptual message showing how known techniques from interdisciplinary areas can lead to new applications in theoretical computer science.
In the next subsections, we will try to relate our results to the existing literature and give credit to the many authors who inspired our work. | {
"cite_N": [
"@cite_35",
"@cite_14",
"@cite_36",
"@cite_45",
"@cite_5",
"@cite_13"
],
"mid": [
"2962992640",
"1971907125",
"2033718639",
"1974893148",
"1998836349",
"2037389370"
],
"abstract": [
"Let G be a graph on n vertices. A 2-lift of G is a graph H on 2n vertices, with a covering map π : H → G. It is not hard to see that all eigenvalues of G are also eigenvalues of H. In addition, H has n “new” eigenvalues. We conjecture that every d-regular graph has a 2-lift such that all new eigenvalues are in the range [−2√(d−1), 2√(d−1)] (If true, this is tight, e.g. by the Alon-Boppana bound). Here we show that every graph of maximal degree d has a 2-lift such that all",
"For a graph G, a random n-lift of G has the vertex set V(G) × [n] and for each edge [u, v] ∈ E(G), there is a random matching between u × [n] and v × [n]. We present bounds on the chromatic number and on the independence number of typical random lifts, with G fixed and n → ∞. For the independence number, upper and lower bounds are obtained as solutions to certain optimization problems on the base graph. For a base graph G with chromatic number χ and fractional chromatic number χf, we show that the chromatic number of typical lifts is bounded from below by const. √χ log χ and also by const. χf log2 χf (trivially, it is bounded by χ from above). We have examples of graphs where the chromatic number of the lift equals χ almost surely, and others where it is a.s. O(χ log χ). Many interesting problems remain open.",
"We study matchings on sparse random graphs by means of the cavity method. We first show how the method reproduces several known results about maximum and perfect matchings in regular and Erdős–Rényi random graphs. Our main new result is the computation of the entropy, i.e. the leading order of the logarithm of the number of solutions, of matchings with a given size. We derive both an algorithm to compute this entropy for an arbitrary graph with a girth that diverges in the large size limit, and an analytic result for the entropy in regular and Erdős–Rényi random graph ensembles.",
"In this paper we describe a simple model for random graphs that have an n-fold covering map onto a fixed finite base graph. Roughly, given a base graph G and an integer n, we form a random graph by replacing each vertex of G by a set of n vertices, and joining these sets by random matchings whenever the corresponding vertices are adjacent in G. The resulting graph covers the original graph in the sense that the two are locally isomorphic. We suggest possible applications of the model, such as constructing graphs with extremal properties in a more controlled fashion than offered by the standard random models, and also “randomizing” given graphs. The main specific result that we prove here (Theorem 1) is that if δ ≥ 3 is the smallest vertex degree in G, then almost all n-covers of G are δ-connected. In subsequent papers we will address other graph properties, such as girth, expansion and chromatic number.",
"We study random lifts of a graph G as defined in [1]. We prove a 0-1 law which states that for every graph G either almost every lift of G has a perfect matching, or almost none of its lifts has a perfect matching. We provide a precise description of this dichotomy. Roughly speaking, the a.s. existence of a perfect matching in the lift depends on the existence of a fractional perfect matching in G. The precise statement appears in Theorem 1.",
"We investigate the general monomer-dimer partition function, P(x), which is a polynomial in the monomer activity, x, with coefficients depending on the dimer activities. Our main result is that P(x) has its zeros on the imaginary axis when the dimer activities are nonnegative. Therefore, no monomer-dimer system can have a phase transition as a function of monomer density except, possibly, when the monomer density is minimal (i.e. x=0). Elaborating on this theme we prove the existence and analyticity of correlation functions (away from x=0) in the thermodynamic limit. Among other things we obtain bounds on the compressibility and derive a new variable in which to make an expansion of the free energy that converges down to the minimal monomer density. We also relate the monomer-dimer problem to the Heisenberg and Ising models of a magnet and derive Christoffell-Darboux formulas for the monomer-dimer and Ising model partition functions. This casts the Ising model in a new light and provides an alternative proof of the Lee-Yang circle theorem. We also derive joint complex analyticity domains in the monomer and dimer activities. Our considerations are independent of geometry and hence are valid for any dimensionality."
]
} |
1507.04739 | 2144865572 | We give a sharp lower bound on the number of matchings of a given size in a bipartite graph. When specialized to regular bipartite graphs, our results imply Friedland's Lower Matching Conjecture and Schrijver's theorem proven by Gurvits and Csikvari. Indeed, our work extends the recent work of Csikvari done for regular and bi-regular bipartite graphs. Moreover, our lower bounds are order optimal as they are attained for a sequence of @math -lifts of the original graph as well as for random @math -lifts of the original graph when @math tends to infinity. We then extend our results to permanents and subpermanents sums. For permanents, we are able to recover the lower bound of Schrijver recently proved by Gurvits using stable polynomials. Our proof is algorithmic and borrows ideas from the theory of local weak convergence of graphs, statistical physics and covers of graphs. We provide new lower bounds for subpermanents sums and obtain new results on the number of matching in random @math -lifts with some implications for the matching measure and the spectral measure of random @math -lifts as well as for the spectral measure of infinite trees. | At this stage, we should recall that computing the number of matchings falls into the class of @math -complete problems as well as the problem of counting the number of perfect matchings in a given bipartite graph, i.e. computing the permanent of an arbitrary @math matrix. By previous discussion, we see that if the graph is locally tree like, then the tree of self-avoiding paths and the universal cover are locally the same, and one can believe that our algorithm will compute a good approximation for counting matchings. This idea was formalized in @cite_36 and proved rigorously in @cite_38 for random graphs. Our Theorem shows that these results extend to random lifts. The lower bound in ) is called the (logarithm of the) Bethe permanent in the physics literature @cite_16 @cite_22 @cite_24 . 
Similar ideas using lifts or covers of graphs have appeared in the literature about message passing algorithms, see @cite_28 @cite_12 and references therein. We refer to @cite_26 for more results connecting Belief Propagation with our setting. | {
"cite_N": [
"@cite_38",
"@cite_26",
"@cite_22",
"@cite_36",
"@cite_28",
"@cite_24",
"@cite_16",
"@cite_12"
],
"mid": [
"",
"1653950878",
"2963320833",
"2033718639",
"2132806676",
"2761017552",
"1801773550",
""
],
"abstract": [
"",
"For the minimum cardinality vertex cover and maximum cardinality matching problems, the max-product form of belief propagation (BP) is known to perform poorly on general graphs. In this paper, we present an iterative loopy annealing BP (LABP) algorithm which is shown to converge and to solve a Linear Programming relaxation of the vertex cover or matching problem on general graphs. LABP finds (asymptotically) a minimum half-integral vertex cover (hence provides a 2-approximation) and a maximum fractional matching on any graph. We also show that LABP finds (asymptotically) a minimum size vertex cover for any bipartite graph and as a consequence compute the matching number of the graph. Our proof relies on some subtle monotonicity arguments for the local iteration. We also show that the Bethe free entropy is concave and that LABP maximizes it. Using loop calculus, we also give an exact (also intractable for general graphs) expression of the partition function for matching in term of the LABP messages which can be used to improve mean-field approximations.",
"We discuss schemes for exact and approximate computations of permanents, and compare them with each other. Specifically, we analyze the belief propagation (BP) approach and its fractional belief propagation (FBP) generalization for computing the permanent of a non-negative matrix. Known bounds and Conjectures are verified in experiments, and some new theoretical relations, bounds and Conjectures are proposed. The fractional free energy (FFE) function is parameterized by a scalar parameter γ ∈ [-1;1], where γ = -1 corresponds to the BP limit and γ = 1 corresponds to the exclusion principle (but ignoring perfect matching constraints) mean-field (MF) limit. FFE shows monotonicity and continuity with respect to γ. For every non-negative matrix, we define its special value γ* ∈ [-1;0] to be the γ for which the minimum of the γ-parameterized FFE function is equal to the permanent of the matrix, where the lower and upper bounds of the γ-interval corresponds to respective bounds for the permanent. Our experimental analysis suggests that the distribution of γ* varies for different ensembles but γ* always lies within the [-1;-1/2] interval. Moreover, for all ensembles considered, the behavior of γ* is highly distinctive, offering an empirical practical guidance for estimating permanents of non-negative matrices via the FFE approach.",
"We study matchings on sparse random graphs by means of the cavity method. We first show how the method reproduces several known results about maximum and perfect matchings in regular and Erdős–Rényi random graphs. Our main new result is the computation of the entropy, i.e. the leading order of the logarithm of the number of solutions, of matchings with a given size. We derive both an algorithm to compute this entropy for an arbitrary graph with a girth that diverges in the large size limit, and an analytic result for the entropy in regular and Erdős–Rényi random graph ensembles.",
"Sudderth, Wainwright, and Willsky have conjectured that the Bethe approximation corresponding to any fixed point of the belief propagation algorithm over an attractive, pairwise binary graphical model provides a lower bound on the true partition function. In this work, we resolve this conjecture in the affirmative by demonstrating that, for any graphical model with binary variables whose potential functions (not necessarily pairwise) are all log-supermodular, the Bethe partition function always lower bounds the true partition function. The proof of this result follows from a new variant of the \"four functions\" theorem that may be of independent interest.",
"It has recently been observed that the permanent of a nonnegative square matrix, i.e., of a square matrix containing only nonnegative real entries, can very well be approximated by solving a certain Bethe free energy function minimization problem with the help of the sum-product algorithm. We call the resulting approximation of the permanent the Bethe permanent. In this paper, we give reasons why this approach to approximating the permanent works well. Namely, we show that the Bethe free energy function is convex and that the sum-product algorithm finds its minimum efficiently. We then discuss the fact that the permanent is lower bounded by the Bethe permanent, and we comment on potential upper bounds on the permanent based on the Bethe permanent. We also present a combinatorial characterization of the Bethe permanent in terms of permanents of so-called lifted versions of the matrix under consideration. Moreover, we comment on possibilities to modify the Bethe permanent so that it approximates the permanent even better, and we conclude the paper with some observations and conjectures about permanent-based pseudocodewords and permanent-based kernels.",
"We consider computation of the permanent of a positive (N × N) non-negative matrix, P = (Pji|i, j = 1, ..., N), or equivalently the problem of weighted counting of the perfect matchings over the complete bipartite graph KN, N. The problem is known to be of likely exponential complexity. Stated as the partition function Z of a graphical model, the problem allows for exact loop calculus representation (Chertkov M and Chernyak V 2006 Phys. Rev. E 72 065102) in terms of an interior minimum of the Bethe free energy functional over non-integer doubly stochastic matrix of marginal beliefs, β = (βji|i, j = 1, ..., N), also correspondent to a fixed point of the iterative message-passing algorithm of the belief propagation (BP) type. Our main result is an explicit expression of the exact partition function (permanent) in terms of the matrix of BP marginals, β, as Z = Perm(P) = ZBP Perm(βji(1 − βji)) / ∏i, j(1 − βji), where ZBP is the BP expression for the permanent stated explicitly in terms of β. We give two derivations of the formula, a direct one based on the Bethe free energy and an alternative one combining the Ihara graph-ζ function and the loop calculus approaches. Assuming that the matrix β of the BP marginals is calculated, we provide two lower bounds and one upper bound to estimate the multiplicative term. Two complementary lower bounds are based on the Gurvits–van der Waerden theorem and on a relation between the modified permanent and determinant, respectively.",
""
]
} |
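The record above concerns approximations of the permanent, i.e. weighted counting of the perfect matchings of a bipartite graph through its biadjacency matrix. A brute-force reference implementation (the naive sum over permutations, from the definition) is handy for checking any approximation on tiny instances; the example matrices are illustrative choices, not from the paper.

```python
from itertools import permutations

def permanent(A):
    """Permanent of a square matrix by direct summation over permutations."""
    n = len(A)
    total = 0
    for p in permutations(range(n)):
        prod = 1
        for i, j in enumerate(p):
            prod *= A[i][j]
        total += prod
    return total

# biadjacency matrix of C4 (a 2-regular bipartite graph): 2 perfect matchings
print(permanent([[1, 1], [1, 1]]))  # 2
# all-ones 3x3 matrix: permanent = 3! = 6
print(permanent([[1, 1, 1], [1, 1, 1], [1, 1, 1]]))  # 6
```

This is O(n!·n) and only usable for very small n, which is exactly why lower bounds such as the Bethe permanent discussed above are of interest.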
1507.04739 | 2144865572 | We give a sharp lower bound on the number of matchings of a given size in a bipartite graph. When specialized to regular bipartite graphs, our results imply Friedland's Lower Matching Conjecture and Schrijver's theorem proven by Gurvits and Csikvari. Indeed, our work extends the recent work of Csikvari done for regular and bi-regular bipartite graphs. Moreover, our lower bounds are order optimal as they are attained for a sequence of @math -lifts of the original graph as well as for random @math -lifts of the original graph when @math tends to infinity. We then extend our results to permanents and subpermanents sums. For permanents, we are able to recover the lower bound of Schrijver recently proved by Gurvits using stable polynomials. Our proof is algorithmic and borrows ideas from the theory of local weak convergence of graphs, statistical physics and covers of graphs. We provide new lower bounds for subpermanents sums and obtain new results on the number of matching in random @math -lifts with some implications for the matching measure and the spectral measure of random @math -lifts as well as for the spectral measure of infinite trees. | We now relate our results to the matching measure used by Csikvári in @cite_48 and show how our results allow us to compute the spectral measure of infinite trees. The matching polynomial is defined as: $Q_G(z) = \sum_{k=0}^{\nu(G)} (-1)^k m_k(G)\, z^{n-2k} = z^n P_G(-z^{-2})$. We define the matching measure of @math denoted by @math as the uniform distribution over the roots of the matching polynomial of @math : $\rho_G = \frac{1}{v(G)} \sum_{i=1}^{v(G)} \delta_{z_i}$, where the @math 's are the roots of @math . Note that @math so that @math is symmetric. The fundamental theorem for the matching polynomial is the following. | {
"cite_N": [
"@cite_48"
],
"mid": [
"2128943288"
],
"abstract": [
"Friedland's Lower Matching Conjecture asserts that if @math is a @math --regular bipartite graph on @math vertices, and @math denotes the number of matchings of size @math , then @math where @math . When @math , this conjecture reduces to a theorem of Schrijver which says that a @math --regular bipartite graph on @math vertices has at least @math perfect matchings. L. Gurvits proved an asymptotic version of the Lower Matching Conjecture, namely he proved that @math In this paper, we prove the Lower Matching Conjecture. In fact, we will prove a slightly stronger statement which gives an extra @math factor compared to the conjecture if @math is separated away from @math and @math , and is tight up to a constant factor if @math is separated away from @math . We will also give a new proof of Gurvits's and Schrijver's theorems, and we extend these theorems to @math --biregular bipartite graphs."
]
} |
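The matching polynomial defined in the record just above is easy to evaluate for small graphs. The sketch below computes Q_G from brute-force matching counts m_k for the path on 3 vertices, whose matching polynomial is z³ − 2z with the symmetric root set {0, √2, −√2}; the example graph and the symmetry check are illustrative choices, not from the paper.

```python
from itertools import combinations

def matching_numbers(edges):
    """m_k = number of k-edge matchings, by brute force."""
    m = [1]  # m_0: the empty matching
    k = 1
    while True:
        c = sum(
            1 for sub in combinations(edges, k)
            if len({v for e in sub for v in e}) == 2 * k  # pairwise disjoint
        )
        if c == 0:
            break
        m.append(c)
        k += 1
    return m

def Q(z, n, edges):
    """Matching polynomial Q_G(z) = sum_k (-1)^k m_k(G) z^(n-2k)."""
    return sum((-1) ** k * mk * z ** (n - 2 * k)
               for k, mk in enumerate(matching_numbers(edges)))

# Path on 3 vertices: m = [1, 2], so Q(z) = z^3 - 2z
n, edges = 3, [(0, 1), (1, 2)]
print([Q(z, n, edges) for z in (0, 1, 2)])  # [0, -1, 4]
# roots are 0 and +-sqrt(2); for odd n, Q(-z) = -Q(z), so the root
# multiset -- hence the matching measure -- is symmetric about 0
assert all(Q(-z, n, edges) == -Q(z, n, edges) for z in (1, 2, 3))
```

The uniform measure on these v(G) = 3 roots is exactly the matching measure of the path, with an atom of mass 1/3 at 0.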
1507.04739 | 2144865572 | We give a sharp lower bound on the number of matchings of a given size in a bipartite graph. When specialized to regular bipartite graphs, our results imply Friedland's Lower Matching Conjecture and Schrijver's theorem proven by Gurvits and Csikvari. Indeed, our work extends the recent work of Csikvari done for regular and bi-regular bipartite graphs. Moreover, our lower bounds are order optimal as they are attained for a sequence of @math -lifts of the original graph as well as for random @math -lifts of the original graph when @math tends to infinity. We then extend our results to permanents and subpermanents sums. For permanents, we are able to recover the lower bound of Schrijver recently proved by Gurvits using stable polynomials. Our proof is algorithmic and borrows ideas from the theory of local weak convergence of graphs, statistical physics and covers of graphs. We provide new lower bounds for subpermanents sums and obtain new results on the number of matching in random @math -lifts with some implications for the matching measure and the spectral measure of random @math -lifts as well as for the spectral measure of infinite trees. | In particular, the matching measure of @math is a probability measure on @math . Of course, the polynomials @math or @math contain the same information as the matching measure @math . We can express the quantity of interest in terms of @math (see Lemma 8.5 in @cite_13 , @cite_31 or @cite_48 ): for @math , $\frac{1}{v(G)} \frac{z P_G'(z)}{P_G(z)} = \frac{1}{2} \int \frac{z \lambda^2}{1 + z \lambda^2}\, d\rho_G(\lambda), \qquad \frac{\nu(G)}{v(G)} = \frac{1}{2} \big( 1 - \rho_G(\{0\}) \big)$. | {
"cite_N": [
"@cite_48",
"@cite_31",
"@cite_13"
],
"mid": [
"2128943288",
"1574882396",
"2037389370"
],
"abstract": [
"Friedland's Lower Matching Conjecture asserts that if @math is a @math --regular bipartite graph on @math vertices, and @math denotes the number of matchings of size @math , then @math where @math . When @math , this conjecture reduces to a theorem of Schrijver which says that a @math --regular bipartite graph on @math vertices has at least @math perfect matchings. L. Gurvits proved an asymptotic version of the Lower Matching Conjecture, namely he proved that @math In this paper, we prove the Lower Matching Conjecture. In fact, we will prove a slightly stronger statement which gives an extra @math factor compared to the conjecture if @math is separated away from @math and @math , and is tight up to a constant factor if @math is separated away from @math . We will also give a new proof of Gurvits's and Schrijver's theorems, and we extend these theorems to @math --biregular bipartite graphs.",
"We define the matching measure of a lattice L as the spectral measure of the tree of self-avoiding walks in L. We connect this invariant to the monomer–dimer partition function of a sequence of finite graphs converging to L. This allows us to express the monomer–dimer free energy of L in terms of the matching measure. Exploiting an analytic advantage of the matching measure over the Mayer series then leads to new, rigorous bounds on the monomer–dimer free energies of various Euclidean lattices. While our estimates use only the computational data given in previous papers, they improve the known bounds significantly.",
"We investigate the general monomer-dimer partition function,P(x), which is a polynomial in the monomer activity,x, with coefficients depending on the dimer activities. Our main result is thatP(x) has its zeros on the imaginary axis when the dimer activities are nonnegative. Therefore, no monomer-dimer system can have a phase transition as a function of monomer density except, possibly, when the monomer density is minimal (i.e.x=0). Elaborating on this theme we prove the existence and analyticity of correlation functions (away fromx=0) in the thermodynamic limit. Among other things we obtain bounds on the compressibility and derive a new variable in which to make an expansion of the free energy that converges down to the minimal monomer density. We also relate the monomer-dimer problem to the Heisenberg and Ising models of a magnet and derive Christoffell-Darboux formulas for the monomer-dimer and Ising model partition functions. This casts the Ising model in a new light and provides an alternative proof of the Lee-Yang circle theorem. We also derive joint complex analyticity domains in the monomer and dimer activities. Our considerations are independent of geometry and hence are valid for any dimensionality."
]
} |
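The relation between the maximum matching ratio and the mass of the matching measure at zero discussed in the row above, nu(G)/v(G) = (1/2)(1 - rho({0})), can be sanity-checked on a tiny graph, using the standard convention that the matching measure of a finite graph is the uniform distribution on the roots of its matching polynomial. The NumPy sketch below uses the 3-vertex path as a toy input; all variable names are illustrative, not from the paper.

```python
import numpy as np

# Matching polynomial of the 3-vertex path P3: m_0 = 1 (empty matching),
# m_1 = 2 (either of the two edges), so mu(x) = x^3 - 2x.
coeffs = [1, 0, -2, 0]                      # x^3 + 0x^2 - 2x + 0
roots = np.roots(coeffs)                    # roots: 0, +sqrt(2), -sqrt(2)
n_vertices = 3
nu = 1                                      # maximum matching size of P3

# Matching measure of a finite graph: uniform distribution on the roots,
# so rho({0}) is the fraction of roots equal to zero.
mass_at_zero = float(np.mean(np.isclose(roots, 0.0)))

# The identity predicts nu(G)/v(G) = (1/2) * (1 - rho({0})).
ratio = 0.5 * (1.0 - mass_at_zero)
```

Here `mass_at_zero` is 1/3 (one zero root out of three), and `ratio` recovers nu/n = 1/3 as the identity predicts.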
1507.04739 | 2144865572 | We give a sharp lower bound on the number of matchings of a given size in a bipartite graph. When specialized to regular bipartite graphs, our results imply Friedland's Lower Matching Conjecture and Schrijver's theorem proven by Gurvits and Csikvari. Indeed, our work extends the recent work of Csikvari done for regular and bi-regular bipartite graphs. Moreover, our lower bounds are order optimal as they are attained for a sequence of @math -lifts of the original graph as well as for random @math -lifts of the original graph when @math tends to infinity. We then extend our results to permanents and subpermanents sums. For permanents, we are able to recover the lower bound of Schrijver recently proved by Gurvits using stable polynomials. Our proof is algorithmic and borrows ideas from the theory of local weak convergence of graphs, statistical physics and covers of graphs. We provide new lower bounds for subpermanents sums and obtain new results on the number of matching in random @math -lifts with some implications for the matching measure and the spectral measure of random @math -lifts as well as for the spectral measure of infinite trees. | As explained above, Csikvári @cite_48 uses this representation and the fact that for a sequence of @math -regular graphs converging to a @math -regular tree, the limiting matching measure is given by the Kesten-MacKay measure, to get an explicit formula for the limiting partition function. Our approach relies on local recursions instead of the connection with the matching measure. Since we are able to solve these recursions, we get the following result for the limiting matching measure. | {
"cite_N": [
"@cite_48"
],
"mid": [
"2128943288"
],
"abstract": [
"Friedland's Lower Matching Conjecture asserts that if @math is a @math --regular bipartite graph on @math vertices, and @math denotes the number of matchings of size @math , then @math where @math . When @math , this conjecture reduces to a theorem of Schrijver which says that a @math --regular bipartite graph on @math vertices has at least @math perfect matchings. L. Gurvits proved an asymptotic version of the Lower Matching Conjecture, namely he proved that @math In this paper, we prove the Lower Matching Conjecture. In fact, we will prove a slightly stronger statement which gives an extra @math factor compared to the conjecture if @math is separated away from @math and @math , and is tight up to a constant factor if @math is separated away from @math . We will also give a new proof of Gurvits's and Schrijver's theorems, and we extend these theorems to @math --biregular bipartite graphs."
]
} |
1507.04739 | 2144865572 | We give a sharp lower bound on the number of matchings of a given size in a bipartite graph. When specialized to regular bipartite graphs, our results imply Friedland's Lower Matching Conjecture and Schrijver's theorem proven by Gurvits and Csikvari. Indeed, our work extends the recent work of Csikvari done for regular and bi-regular bipartite graphs. Moreover, our lower bounds are order optimal as they are attained for a sequence of @math -lifts of the original graph as well as for random @math -lifts of the original graph when @math tends to infinity. We then extend our results to permanents and subpermanents sums. For permanents, we are able to recover the lower bound of Schrijver recently proved by Gurvits using stable polynomials. Our proof is algorithmic and borrows ideas from the theory of local weak convergence of graphs, statistical physics and covers of graphs. We provide new lower bounds for subpermanents sums and obtain new results on the number of matching in random @math -lifts with some implications for the matching measure and the spectral measure of random @math -lifts as well as for the spectral measure of infinite trees. | Note that our theorem gives a generating function of the moments of @math since for @math sufficiently small, we have: \int \frac{z \lambda^2}{1 + z \lambda^2} \, d\rho_{T(G)}(\lambda) = \sum_{i=1}^{\infty} (-1)^{i+1} z^i \int \lambda^{2i} \, d\rho_{T(G)}(\lambda), and the series is convergent since the support of all the @math and hence of @math is contained in @math . As shown by Godsil in @cite_9 , for finite trees, the spectral measure and the matching measure coincide; this is still true for infinite trees @cite_18 @cite_23 @cite_38 . In particular, the moments of the matching measure @math can be interpreted as the average number of closed walks on @math where the average is taken over the starting point of the walk (see Proposition for a precise definition of the random root as the starting point of the walk).
To be more precise, for a finite graph @math , we denote by @math the real eigenvalues of its adjacency matrix and we define the empirical spectral measure of the graph @math as the probability measure on @math : \mu_G = \frac{1}{v(G)} \sum_{i=1}^{v(G)} \delta_{\lambda_i} . | {
"cite_N": [
"@cite_38",
"@cite_9",
"@cite_18",
"@cite_23"
],
"mid": [
"",
"2055519174",
"2018762249",
"2764230287"
],
"abstract": [
"",
"The matching polynomial α(G, x) of a graph G is a form of the generating function for the number of sets of k independent edges of G. In this paper we show that if G is a graph with vertex v then there is a tree T with vertex w such that This result has a number of consequences. Here we use it to prove that α(G , 1 x) xα(G, 1 x) is the generating function for a certain class of walks in G. As an application of these results we then establish some new properties of α(G, x).",
"We analyze the convergence of the spectrum of large random graphs to the spectrum of a limit infinite graph. We apply these results to graphs converging locally to trees and derive a new formula for the Stieltjes transform of the spectral measure of such graphs. We illustrate our results on the uniform regular graphs, Erdos-Renyi graphs and graphs with a given degree sequence. We give examples of application for weighted graphs, bipartite graphs and the uniform spanning tree of n vertices. © 2010 Wiley Periodicals, Inc. Random Struct. Alg., 2010",
"We investigate the rank of the adjacency matrix of large diluted random graphs: for a sequence of graphs @math converging locally to a Galton--Watson tree @math (GWT), we provide an explicit formula for the asymptotic multiplicity of the eigenvalue 0 in terms of the degree generating function @math of @math . In the first part, we show that the adjacency operator associated with @math is always self-adjoint; we analyze the associated spectral measure at the root and characterize the distribution of its atomic mass at 0. In the second part, we establish a sufficient condition on @math for the expectation of this atomic mass to be precisely the normalized limit of the dimension of the kernel of the adjacency matrices of @math . Our proofs borrow ideas from analysis of algorithms, functional analysis, random matrix theory and statistical physics."
]
} |
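The empirical spectral measure defined at the end of the row above, which places mass 1/v(G) on each adjacency eigenvalue, is straightforward to compute for a small graph, and its even moments count closed walks per vertex as the text notes. A minimal NumPy sketch (names and the 4-cycle toy input are our own, not from the paper):

```python
import numpy as np

# Adjacency matrix of the 4-cycle C4 (2-regular, spectrum {2, 0, 0, -2}).
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)

def spectral_eigenvalues(A):
    """Eigenvalues of the adjacency matrix; the empirical spectral
    measure puts mass 1/v(G) on each of them."""
    return np.sort(np.linalg.eigvalsh(A))

def moment(A, k):
    """k-th moment of the empirical spectral measure = average number
    of closed k-walks per vertex (trace of A^k divided by v(G))."""
    return float(np.mean(spectral_eigenvalues(A) ** k))
```

For C4 the second moment equals 2, the degree, matching the count of closed 2-walks (out and back along each incident edge).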
1507.04725 | 900403624 | We show that on every Ramanujan graph @math , the simple random walk exhibits cutoff: when @math has @math vertices and degree @math , the total-variation distance of the walk from the uniform distribution at time @math is asymptotically @math where @math is a standard normal variable and @math is an explicit constant. Furthermore, for all @math , @math -regular Ramanujan graphs minimize the asymptotic @math -mixing time for SRW among all @math -regular graphs. Our proof also shows that, for every vertex @math in @math as above, its distance from @math of the vertices is asymptotically @math . | Durrett [DurrettRGW] conjectured in 2008 that the random walk should have cutoff on a uniformly chosen @math -regular graph on @math vertices (typically a good expander) with probability tending to 1 as @math ; indeed this is the case, as was verified by the first author and Sly @cite_14 in 2010. Subsequently, expanders without cutoff were constructed in @cite_27 , but these were highly asymmetric. The conjectured behavior of cutoff for all transitive expanders was reiterated in the latter works (see [LS-gnd, Conjecture 6.1] and [LSexp]), yet this was neither verified nor refuted on any single example to date. | {
"cite_N": [
"@cite_27",
"@cite_14"
],
"mid": [
"1631603072",
"1526841711"
],
"abstract": [
"Linear expanders have numerous applications to theoretical computer science. Here we show that a regular bipartite graph is an expander if and only if the second largest eigenvalue of its adjacency matrix is well separated from the first. This result, which has an analytic analogue for Riemannian manifolds enables one to generate expanders randomly and check efficiently their expanding properties. It also supplies an efficient algorithm for approximating the expanding properties of a graph. The exact determination of these properties is known to be coNP-complete.",
"1. Overview 2. Erdos-Renyi random graphs 3. Fixed degree distributions 4. Power laws 5. Small worlds 6. Random walks 7. CHKNS model."
]
} |
1507.04725 | 900403624 | We show that on every Ramanujan graph @math , the simple random walk exhibits cutoff: when @math has @math vertices and degree @math , the total-variation distance of the walk from the uniform distribution at time @math is asymptotically @math where @math is a standard normal variable and @math is an explicit constant. Furthermore, for all @math , @math -regular Ramanujan graphs minimize the asymptotic @math -mixing time for SRW among all @math -regular graphs. Our proof also shows that, for every vertex @math in @math as above, its distance from @math of the vertices is asymptotically @math . | The concentration of measure phenomenon in expanders, discovered by Alon and Milman @cite_15 , implies that the distance from a prescribed vertex is concentrated up to an @math -window. Formally, for every sequence of expander graphs @math on @math vertices and vertex @math there exists a sequence @math and constants @math so that, for every @math , Corollary shows that @math for Ramanujan graphs. | {
"cite_N": [
"@cite_15"
],
"mid": [
"1993111701"
],
"abstract": [
"A general method for obtaining asymptotic isoperimetric inequalities for families of graphs is developed. Some of these inequalities have been applied to functional analysis. This method uses the second smallest eigenvalue of a certain matrix associated with the graph and it is the discrete version of a method used before for Riemannian manifolds. Also some results are obtained on spectra of graphs that show how this eigenvalue is related to the structure of the graph. Combining these results with some known results on group representations many new examples are constructed explicitly of linear sized expanders and superconcentrators."
]
} |
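The rows above revolve around the nontrivial eigenvalues of regular graphs. As a hedged illustration, the Ramanujan condition, every adjacency eigenvalue other than the trivial ones bounded by 2*sqrt(d-1) in absolute value, can be checked numerically; the helper below and its tolerance handling are our own sketch, not taken from the papers.

```python
import numpy as np

def is_ramanujan(A, d, tol=1e-8):
    """Check the Ramanujan condition for a d-regular graph: every
    adjacency eigenvalue other than the trivial ones (+d, and -d when
    the graph is bipartite) satisfies |lambda| <= 2*sqrt(d - 1)."""
    eigs = np.linalg.eigvalsh(np.asarray(A, dtype=float))
    nontrivial = eigs[np.abs(np.abs(eigs) - d) > tol]
    return bool(np.all(np.abs(nontrivial) <= 2.0 * np.sqrt(d - 1.0) + tol))

# K4, the complete graph on 4 vertices: 3-regular with spectrum
# {3, -1, -1, -1}; its nontrivial eigenvalues sit well inside the
# Ramanujan bound 2*sqrt(2) ~ 2.83.
A_K4 = np.ones((4, 4)) - np.eye(4)
```

The same check applied to a sequence of random regular graphs would flag most of them as near-Ramanujan, consistent with the expander discussion above.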
1507.04725 | 900403624 | We show that on every Ramanujan graph @math , the simple random walk exhibits cutoff: when @math has @math vertices and degree @math , the total-variation distance of the walk from the uniform distribution at time @math is asymptotically @math where @math is a standard normal variable and @math is an explicit constant. Furthermore, for all @math , @math -regular Ramanujan graphs minimize the asymptotic @math -mixing time for SRW among all @math -regular graphs. Our proof also shows that, for every vertex @math in @math as above, its distance from @math of the vertices is asymptotically @math . | A new impetus for understanding distances in Ramanujan graphs is due to their role as building blocks in quantum computing; see the influential letter by P. Sarnak @cite_25 . Some of Sarnak's ideas were developed further by his student N.T. Sardari in an insightful paper @cite_19 posted to the arXiv a few months after the initial posting of the present paper. For a certain infinite family of @math -regular @math -vertex Ramanujan graphs, Sardari @cite_19 shows that the diameter is at least @math and also gives an alternative proof of the first part of Corollary . | {
"cite_N": [
"@cite_19",
"@cite_25"
],
"mid": [
"2182740957",
"1559907478"
],
"abstract": [
"For an infinite family of @math -regular LPS Ramanujan graphs, we show that the diameter of these graphs is greater than or equal to @math , where @math is an odd prime number and @math is the number of vertices. On the other hand, for any @math -regular Ramanujan graph we show that the distance of only a tiny fraction of all pairs of vertices is greater than @math . We also have some numerical experiments for LPS Ramanujan graphs and random Cayley graphs which suggest that the diameters are asymptotically @math and @math , respectively. These are consistent with Sarnak's expectation on the covering exponent of universal quantum gates and our conjecture for the optimal strong approximation for quadratic forms in 4 variables.",
"The authors present a unified treatment of basic topics that arise in Fourier analysis. Their intention is to illustrate the role played by the structure of Euclidean spaces, particularly the action of translations, dilatations, and rotations, and to motivate the study of harmonic analysis on more general spaces having an analogous structure, e.g., symmetric spaces."
]
} |
1507.04314 | 2197211043 | Community-based question answering platforms can be rich sources of information on a variety of specialized topics, from finance to cooking. The usefulness of such platforms depends heavily on user contributions (questions and answers), but also on respecting the community rules. As a crowd-sourced service, such platforms rely on their users for monitoring and flagging content that violates community rules. Common wisdom is to eliminate the users who receive many flags. Our analysis of a year of traces from a mature Q&A site shows that the number of flags does not tell the full story: on one hand, users with many flags may still contribute positively to the community. On the other hand, users who never get flagged are found to violate community rules and get their accounts suspended. This analysis, however, also shows that abusive users are betrayed by their network properties: we find strong evidence of homophilous behavior and use this finding to detect abusive users who go under the community radar. Based on our empirical observations, we build a classifier that is able to detect abusive users with an accuracy as high as 83%. | Research in this area has investigated textual aspects of questions and answers. In so doing, it has proposed algorithmic solutions to automatically determine: the quality of questions @cite_8 @cite_22 and answers @cite_5 @cite_31 , the extent to which certain questions are easy to answer @cite_0 @cite_23 , and the type of a given question (e.g., factual or conversational) @cite_26 . | {
"cite_N": [
"@cite_26",
"@cite_22",
"@cite_8",
"@cite_0",
"@cite_23",
"@cite_5",
"@cite_31"
],
"mid": [
"2126104150",
"1969085038",
"2116875384",
"2280386769",
"2126908276",
"2057415299",
"2037858832"
],
"abstract": [
"Tens of thousands of questions are asked and answered every day on social question and answer (Q&A) Web sites such as Yahoo Answers. While these sites generate an enormous volume of searchable data, the problem of determining which questions and answers are archival quality has grown. One major component of this problem is the prevalence of conversational questions, identified both by Q&A sites and academic literature as questions that are intended simply to start discussion. For example, a conversational question such as \"do you believe in evolution?\" might successfully engage users in discussion, but probably will not yield a useful web page for users searching for information about evolution. Using data from three popular Q&A sites, we confirm that humans can reliably distinguish between these conversational questions and other informational questions, and present evidence that conversational questions typically have much lower potential archival value than informational questions. Further, we explore the use of machine learning techniques to automatically classify questions as conversational or informational, learning in the process about categorical, linguistic, and social differences between different question types. Our algorithms approach human performance, attaining 89.7% classification accuracy in our experiments.",
"At community question answering services, users are usually encouraged to rate questions by votes. The questions with the most votes are then recommended and ranked on the top when users browse questions by category. As users are not obligated to rate questions, usually only a small proportion of questions eventually gets rating. Thus, in this paper, we are concerned with learning to recommend questions from user ratings of a limited size. To overcome the data sparsity, we propose to utilize questions without users rating as well. Further, as there exist certain noises within user ratings (the preference of some users expressed in their ratings diverges from that of the majority of users), we design a new algorithm called 'majority-based perceptron algorithm' which can avoid the influence of noisy instances by emphasizing its learning over data instances from the majority users. Experimental results from a large collection of real questions confirm the effectiveness of our proposals.",
"Users tend to ask and answer questions in community question answering (CQA) services to seek information and share knowledge. A corollary is that myriad of questions and answers appear in CQA service. Accordingly, volumes of studies have been taken to explore the answer quality so as to provide a preliminary screening for better answers. However, to our knowledge, less attention has so far been paid to question quality in CQA. Knowing question quality provides us with finding and recommending good questions together with identifying bad ones which hinder the CQA service. In this paper, we are conducting two studies to investigate the question quality issue. The first study analyzes the factors of question quality and finds that the interaction between askers and topics results in the differences of question quality. Based on this finding, in the second study we propose a Mutual Reinforcement-based Label Propagation (MRLP) algorithm to predict question quality. We experiment with Yahoo! Answers data and the results demonstrate the effectiveness of our algorithm in distinguishing high-quality questions from low-quality ones.",
"All askers who post questions in Community-based Question Answering (CQA) sites such as Yahoo! Answers, Quora or Baidu's Zhidao, expect to receive an answer, and are frustrated when their questions remain unanswered. We propose to provide a type of \"heads up\" to askers by predicting how many answers, if at all, they will get. Giving a preemptive warning to the asker at posting time should reduce the frustration effect and hopefully allow askers to rephrase their questions if needed. To the best of our knowledge, this is the first attempt to predict the actual number of answers, in addition to predicting whether the question will be answered or not. To this effect, we introduce a new prediction model, specifically tailored to hierarchically structured CQA sites.We conducted extensive experiments on a large corpus comprising 1 year of answering activity on Yahoo! Answers, as opposed to a single day in previous studies. These experiments show that the F1 we achieved is 24% better than in previous work, mostly due the structure built into the novel model.",
"Synchronous social Q&A systems exist on the Web and in the enterprise to connect people with questions to people with answers in real-time. In such systems, askers' desire for quick answers is in tension with costs associated with interrupting numerous candidate answerers per question. Supporting users of synchronous social Q&A systems at various points in the question lifecycle (from conception to answer) helps askers make informed decisions about the likelihood of question success and helps answerers face fewer interruptions. For example, predicting that a question will not be well answered may lead the asker to rephrase or retract the question. Similarly, predicting that an answer is not forthcoming during the dialog can prompt system behaviors such as finding other answerers to join the conversation. As another example, predictions of asker satisfaction can be assigned to completed conversations and used for later retrieval. In this paper, we use data from an instant-messaging-based synchronous social Q&A service deployed to an online community of over two thousand users to study the prediction of: (i) whether a question will be answered, (ii) the number of candidate answerers that the question will be sent to, and (iii) whether the asker will be satisfied by the answer received. Predictions are made at many points of the question lifecycle (e.g., when the question is entered, when the answerer is located, halfway through the asker-answerer dialog, etc.). The findings from our study show that we can learn capable models for these tasks using a broad range of features derived from user profiles, system interactions, question setting, and the dialog between asker and answerer. Our research can lead to more sophisticated and more useful real-time Q&A support.",
"Question answering (QA) helps one go beyond traditional keywords-based querying and retrieve information in more precise form than given by a document or a list of documents. Several community-based QA (CQA) services have emerged allowing information seekers pose their information need as questions and receive answers from their fellow users. A question may receive multiple answers from multiple users and the asker or the community can choose the best answer. While the asker can thus indicate if he was satisfied with the information he received, there is no clear way of evaluating the quality of that information. We present a study to evaluate and predict the quality of an answer in a CQA setting. We chose Yahoo! Answers as such CQA service and selected a small set of questions, each with at least five answers. We asked Amazon Mechanical Turk workers to rate the quality of each answer for a given question based on 13 different criteria. Each answer was rated by five different workers. We then matched their assessments with the actual asker's rating of a given answer. We show that the quality criteria we used faithfully match with asker's perception of a quality answer. We furthered our investigation by extracting various features from questions, answers, and the users who posted them, and training a number of classifiers to select the best answer using those features. We demonstrate a high predictability of our trained models along with the relative merits of each of the features for such prediction. These models support our argument that in case of CQA, contextual information such as a user's profile, can be critical in evaluating and predicting content quality.",
"The quality of user-generated content varies drastically from excellent to abuse and spam. As the availability of such content increases, the task of identifying high-quality content sites based on user contributions --social media sites -- becomes increasingly important. Social media in general exhibit a rich variety of information sources: in addition to the content itself, there is a wide array of non-content information available, such as links between items and explicit quality ratings from members of the community. In this paper we investigate methods for exploiting such community feedback to automatically identify high quality content. As a test case, we focus on Yahoo! Answers, a large community question answering portal that is particularly rich in the amount and types of content and social interactions available in it. We introduce a general classification framework for combining the evidence from different sources of information, that can be tuned automatically for a given social media type and quality definition. In particular, for the community question answering domain, we show that our system is able to separate high-quality items from the rest with an accuracy close to that of humans"
]
} |
1507.04314 | 2197211043 | Community-based question answering platforms can be rich sources of information on a variety of specialized topics, from finance to cooking. The usefulness of such platforms depends heavily on user contributions (questions and answers), but also on respecting the community rules. As a crowd-sourced service, such platforms rely on their users for monitoring and flagging content that violates community rules. Common wisdom is to eliminate the users who receive many flags. Our analysis of a year of traces from a mature Q&A site shows that the number of flags does not tell the full story: on one hand, users with many flags may still contribute positively to the community. On the other hand, users who never get flagged are found to violate community rules and get their accounts suspended. This analysis, however, also shows that abusive users are betrayed by their network properties: we find strong evidence of homophilous behavior and use this finding to detect abusive users who go under the community radar. Based on our empirical observations, we build a classifier that is able to detect abusive users with an accuracy as high as 83%. | Research on CQA users has been mostly about understanding why users contribute content: that is, why users ask questions (askers are failed searchers, in that they use CQA sites when web search fails @cite_10 ); and why they answer questions (e.g., they refrain from answering sensitive questions to avoid being reported for abuse and potentially losing access to the community @cite_11 ). | {
"cite_N": [
"@cite_10",
"@cite_11"
],
"mid": [
"2019432678",
"2032599523"
],
"abstract": [
"While Web search has become increasingly effective over the last decade, for many users' needs the required answers may be spread across many documents, or may not exist on the Web at all. Yet, many of these needs could be addressed by asking people via popular Community Question Answering (CQA) services, such as Baidu Knows, Quora, or Yahoo! Answers. In this paper, we perform the first large-scale analysis of how searchers become askers. For this, we study the logs of a major web search engine to trace the transformation of a large number of failed searches into questions posted on a popular CQA site. Specifically, we analyze the characteristics of the queries, and of the patterns of search behavior that precede posting a question; the relationship between the content of the attempted queries and of the posted questions; and the subsequent actions the user performs on the CQA site. Our work develops novel insights into searcher intent and behavior that lead to asking questions to the community, providing a foundation for more effective integration of automated web search and social information seeking.",
"Posing a question to an online question and answer community does not guarantee a response. Significant prior work has explored and identified members' motivations for contributing to communities of collective action (e.g., Yahoo! Answers); in contrast it is not well understood why members choose to not answer a question they have already read. To explore this issue, we surveyed 135 active members of Yahoo! Answers. We show that top and regular contributors experience the same reasons to not answer a question: subject nature and composition of the question; perception of how the questioner will receive, interpret and react to their response; and a belief that their response will lose its meaning and get lost in the crowd if too many responses have already been given. Informed by our results, we discuss opportunities to improve the efficacy of the question and answer process, and to encourage greater contributions through improved design."
]
} |
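The abstract repeated in the rows above reports a classifier that detects abusive users from their network properties, exploiting homophily. The paper's actual model is richer, but the core idea, scoring a user by the fraction of flagged neighbors, can be sketched in a few lines of plain Python; the interaction graph, flag set, and threshold here are entirely hypothetical toy data.

```python
# Toy interaction graph: users 0-2 are abusive, 3-5 benign (made-up data).
# Homophily means abusive users mostly interact with each other, so an
# unflagged abusive user (here, user 2) is betrayed by flagged neighbors.
edges = [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]
flagged = {0, 1}                  # users the community actually flagged
n_users = 6

adj = {u: [] for u in range(n_users)}
for u, v in edges:
    adj[u].append(v)
    adj[v].append(u)

def flagged_neighbor_fraction(u):
    """Network feature: share of u's neighbors that carry flags."""
    if not adj[u]:
        return 0.0
    return sum(1 for v in adj[u] if v in flagged) / len(adj[u])

def predict_abusive(u, threshold=0.5):
    """Hypothetical rule: call u abusive when most neighbors are flagged."""
    return flagged_neighbor_fraction(u) >= threshold
```

In this toy setting, user 2, abusive but never flagged, is caught because two of its three neighbors are flagged, while the benign users 3-5 score zero.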