aid
stringlengths 9
15
| mid
stringlengths 7
10
| abstract
stringlengths 78
2.56k
| related_work
stringlengths 92
1.77k
| ref_abstract
dict |
|---|---|---|---|---|
1309.7517
|
1577573289
|
Tag recommendation is a major aspect of collaborative tagging systems. It aims to recommend tags to a user for tagging an item. In this paper we present a part of our work in progress which is a novel improvement of recommendations by re-ranking the output of a tag recommender. We mine association rules between candidates tags in order to determine a more consistent list of tags to recommend. Our method is an add-on one which leads to better recommendations as we show in this paper. It is easily parallelizable and morever it may be applied to a lot of tag recommenders. The experiments we did on five datasets with two kinds of tag recommender demonstrated the efficiency of our method.
|
did a similar work enough but first applied a TF-IDF algorithm on the description of the item content, in order to extract keywords of the item @cite_1 . Based on the top keywords, they utilize association rules from history records in order to find the most probable tags to recommend. In addition, if the item has been tagged before by other users or the user has tagged other items before, then history information is also exploited to find the most appropriate recommendations.
|
{
"cite_N": [
"@cite_1"
],
"mid": [
"2577077885"
],
"abstract": [
"While a webpage usually contains hundreds of words, there are only two to three tags that would typically be assigned to this page. Most tags could be found in related aspects of the page, such as the page own content, the anchor texts around the page, and the user’s own opinion about the page. Thus it is not an easy job to extract the most appropriate two to three tags to recommend for a target user. In addition, the recommendations should be unique for every user, since everyone’s perspective for the page is different. In this paper, we treat the task of recommending tags as to find the most likely tags that would be chosen by the user. We first applied the TF-IDF algorithm on the limited description of the page content, in order to extract the keywords for the page. Based on these top keywords, association rules from history records are utilized to find the most probable tags to recommend. In addition, if the page has been tagged before by other users or the user has tagged other resources before, that history information is also exploited to find the most appropriate recommendations."
]
}
|
1309.7950
|
2103423338
|
Interfaces are recognized as an important mechanism to define contracts governing interactions between semi-independent software modules. Well-designed interfaces significantly reduce software complexity and ease maintainability by fostering modularization, hiding implementation details and minimizing the impact caused by changes in the software implementation. However, designing good interfaces is not a trivial task. The presence of interface design defects often yield increased development cost, lower code quality and reduced development productivity. Despite their importance, currently there are only a few research efforts that investigate the quality of interface design. In this paper, we identify and characterize common interface design anomalies and illustrate them via examples taken from well-known open source applications. In order to quantify the presence of interface design anomalies and estimate their impact on the interface design quality, as well on the software quality attributes, such as maintainability, we conduct an empirical study covering 9 open source projects. Building on our empirical results, we develop a set of recommendations to improve interface design.
|
In recent years there has been considerable interest in automatic detection and correction of design defects in object oriented software @cite_11 @cite_21 @cite_10 @cite_12 @cite_13 . Mens and Tourw 'e @cite_5 survey shows that existing approaches are mainly based on code metrics and predefined bad smells in source code @cite_4 @cite_14 @cite_3 @cite_11 .
|
{
"cite_N": [
"@cite_14",
"@cite_4",
"@cite_10",
"@cite_21",
"@cite_3",
"@cite_5",
"@cite_13",
"@cite_12",
"@cite_11"
],
"mid": [
"2138661194",
"2158864412",
"2532677196",
"2149963636",
"2121866145",
"2171383742",
"2113157806",
"2097987957",
"2153887189"
],
"abstract": [
"This book will present guide-lines based on theory and data for establishing a metrics (environment) program for object-oriented software development.",
"Given the central role that software development plays in the delivery and application of information technology, managers are increasingly focusing on process improvement in the software development area. This demand has spurred the provision of a number of new and or improved approaches to software development, with perhaps the most prominent being object-orientation (OO). In addition, the focus on process improvement has increased the demand for software measures, or metrics with which to manage the process. The need for such metrics is particularly acute when an organization is adopting a new technology for which established practices have yet to be developed. This research addresses these needs through the development and implementation of a new suite of metrics for OO design. Metrics developed in previous research, while contributing to the field's understanding of software development processes, have generally been subject to serious criticisms, including the lack of a theoretical base. Following Wand and Weber (1989), the theoretical base chosen for the metrics was the ontology of Bunge (1977). Six design metrics are developed, and then analytically evaluated against Weyuker's (1988) proposed set of measurement principles. An automated data collection tool was then developed and implemented to collect an empirical sample of these metrics at two field sites in order to demonstrate their feasibility and suggest ways in which managers may use these metrics for process improvement. >",
"Software decay is a phenomenon that plagues aging software systems. While in recent years, there has been significant progress in the area of automatic detection of \"code smells\" on one hand, and code refactorings on the other hand, we claim that existing restructuring practices are seriously hampered by their symptomatic and informal (non-repeatable) nature. This paper makes a clear distinction between structural problems and structural symptoms (also known as code smells), and presents a novel, causal approach to restructuring object oriented systems. Our approach is based on two innovations: the encapsulation of correlations of symptoms and additional contextual information into higher-level design problems, and the univocal, explicit mapping of problems to unique refactoring solutions. Due to its explicit, repeatable nature, the approach shows high potential for increased levels of automation in the restructuring process, and consequently a decrease in maintenance costs.",
"Refactoring can have a direct influence on reducing the cost of software maintenance through changing the internal structure of the source-code to improve the overall design that helps the present and future programmers evolve and understand a system. Bad smells are a set of design problems with refactoring identified as a solution. Locating these bad smells has been described as more a human intuition than an exact science. This paper addresses the issue of identifying the characteristics of a bad smell through the use of a set of software metrics. Then by using a pre-defined set of interpretation rules to interpret the software metric results applied to Java source-code, the software engineer can be provided with significant guidance as to the location of bad smells. These issues are addressed in a number of ways. Firstly, a precise definition of bad smells is given from the informal descriptions given by the originators Fowler and Beck. The characteristics of the bad smells have been used to define a set of measurements and interpretation rules for a subset of the bad smells. A prototype tool has been implemented to enable the evaluation of the interpretation rules in two case studies.",
"This paper presents the results of a study in which we empirically investigated the suite of object-oriented (OO) design metrics introduced in (Chidamber and Kemerer, 1994). More specifically, our goal is to assess these metrics as predictors of fault-prone classes and, therefore, determine whether they can be used as early quality indicators. This study is complementary to the work described in (Li and Henry, 1993) where the same suite of metrics had been used to assess frequencies of maintenance changes to classes. To perform our validation accurately, we collected data on the development of eight medium-sized information management systems based on identical requirements. All eight projects were developed using a sequential life cycle model, a well-known OO analysis design method and the C++ programming language. Based on empirical and quantitative analysis, the advantages and drawbacks of these OO metrics are discussed. Several of Chidamber and Kemerer's OO metrics appear to be useful to predict class fault-proneness during the early phases of the life-cycle. Also, on our data set, they are better predictors than \"traditional\" code metrics, which can only be collected at a later phase of the software development processes.",
"We provide an extensive overview of existing research in the field of software refactoring. This research is compared and discussed based on a number of different criteria: the refactoring activities that are supported, the specific techniques and formalisms that are used for supporting these activities, the types of software artifacts that are being refactored, the important issues that need to be taken into account when building refactoring tool support, and the effect of refactoring on the software process. A running example is used to explain and illustrate the main concepts.",
"Refactoring is widely practiced by developers, and considerable research and development effort has been invested in refactoring tools. However, little has been reported about the adoption of refactoring tools, and many assumptions about refactoring practice have little empirical support. In this paper, we examine refactoring tool usage and evaluate some of the assumptions made by other researchers. To measure tool usage, we randomly sampled code changes from four Eclipse and eight Mylyn developers and ascertained, for each refactoring, if it was performed manually or with tool support. We found that refactoring tools are seldom used: 11 percent by Eclipse developers and 9 percent by Mylyn developers. To understand refactoring practice at large, we drew from a variety of data sets spanning more than 39,000 developers, 240,000 tool-assisted refactorings, 2,500 developer hours, and 12,000 version control commits. Using these data, we cast doubt on several previously stated assumptions about how programmers refactor, while validating others. Finally, we interviewed the Eclipse and Mylyn developers to help us understand why they did not use refactoring tools and to gather ideas for future research.",
"Bad smells are signs of potential problems in code. Detecting and resolving bad smells, however, remain time-consuming for software engineers despite proposals on bad smell detection and refactoring tools. Numerous bad smells have been recognized, yet the sequences in which the detection and resolution of different kinds of bad smells are performed are rarely discussed because software engineers do not know how to optimize sequences or determine the benefits of an optimal sequence. To this end, we propose a detection and resolution sequence for different kinds of bad smells to simplify their detection and resolution. We highlight the necessity of managing bad smell resolution sequences with a motivating example, and recommend a suitable sequence for commonly occurring bad smells. We evaluate this recommendation on two nontrivial open source applications, and the evaluation results suggest that a significant reduction in effort ranging from 17.64 to 20 percent can be achieved when bad smells are detected and resolved using the proposed sequence.",
"Almost every expert in Object-Oriented Development stresses the importance of iterative development. As you proceed with the iterative development, you need to add function to the existing code base. If you are really lucky that code base is structured just right to support the new function while still preserving its design integrity. Of course most of the time we are not lucky, the code does not quite fit what we want to do. You could just add the function on top of the code base. But soon this leads to applying patch upon patch making your system more complex than it needs to be. This complexity leads to bugs, and cripples your productivity."
]
}
|
1309.7950
|
2103423338
|
Interfaces are recognized as an important mechanism to define contracts governing interactions between semi-independent software modules. Well-designed interfaces significantly reduce software complexity and ease maintainability by fostering modularization, hiding implementation details and minimizing the impact caused by changes in the software implementation. However, designing good interfaces is not a trivial task. The presence of interface design defects often yield increased development cost, lower code quality and reduced development productivity. Despite their importance, currently there are only a few research efforts that investigate the quality of interface design. In this paper, we identify and characterize common interface design anomalies and illustrate them via examples taken from well-known open source applications. In order to quantify the presence of interface design anomalies and estimate their impact on the interface design quality, as well on the software quality attributes, such as maintainability, we conduct an empirical study covering 9 open source projects. Building on our empirical results, we develop a set of recommendations to improve interface design.
|
On the one hand, a large set of software metrics have been proposed @cite_14 , the most known ones being the object-oriented metrics by Chidamber and Kemerer (CK) @cite_4 . Although the CK metrics are widely used and valuable, they do not address the particularities of interfaces. This was stated by Romano and Pinzger in their empirical studies for predicting change-prone interfaces @cite_17 .
|
{
"cite_N": [
"@cite_14",
"@cite_4",
"@cite_17"
],
"mid": [
"2138661194",
"2158864412",
"2125121913"
],
"abstract": [
"This book will present guide-lines based on theory and data for establishing a metrics (environment) program for object-oriented software development.",
"Given the central role that software development plays in the delivery and application of information technology, managers are increasingly focusing on process improvement in the software development area. This demand has spurred the provision of a number of new and or improved approaches to software development, with perhaps the most prominent being object-orientation (OO). In addition, the focus on process improvement has increased the demand for software measures, or metrics with which to manage the process. The need for such metrics is particularly acute when an organization is adopting a new technology for which established practices have yet to be developed. This research addresses these needs through the development and implementation of a new suite of metrics for OO design. Metrics developed in previous research, while contributing to the field's understanding of software development processes, have generally been subject to serious criticisms, including the lack of a theoretical base. Following Wand and Weber (1989), the theoretical base chosen for the metrics was the ontology of Bunge (1977). Six design metrics are developed, and then analytically evaluated against Weyuker's (1988) proposed set of measurement principles. An automated data collection tool was then developed and implemented to collect an empirical sample of these metrics at two field sites in order to demonstrate their feasibility and suggest ways in which managers may use these metrics for process improvement. >",
"Recent empirical studies have investigated the use of source code metrics to predict the change- and defect-proneness of source code files and classes. While results showed strong correlations and good predictive power of these metrics, they do not distinguish between interface, abstract or concrete classes. In particular, interfaces declare contracts that are meant to remain stable during the evolution of a software system while the implementation in concrete classes is more likely to change. This paper aims at investigating to which extent the existing source code metrics can be used for predicting change-prone Java interfaces. We empirically investigate the correlation between metrics and the number of fine-grained source code changes in interfaces of ten Java open-source systems. Then, we evaluate the metrics to calculate models for predicting change-prone Java interfaces. Our results show that the external interface cohesion metric exhibits the strongest correlation with the number of source code changes. This metric also improves the performance of prediction models to classify Java interfaces into change-prone and not change-prone."
]
}
|
1309.7950
|
2103423338
|
Interfaces are recognized as an important mechanism to define contracts governing interactions between semi-independent software modules. Well-designed interfaces significantly reduce software complexity and ease maintainability by fostering modularization, hiding implementation details and minimizing the impact caused by changes in the software implementation. However, designing good interfaces is not a trivial task. The presence of interface design defects often yield increased development cost, lower code quality and reduced development productivity. Despite their importance, currently there are only a few research efforts that investigate the quality of interface design. In this paper, we identify and characterize common interface design anomalies and illustrate them via examples taken from well-known open source applications. In order to quantify the presence of interface design anomalies and estimate their impact on the interface design quality, as well on the software quality attributes, such as maintainability, we conduct an empirical study covering 9 open source projects. Building on our empirical results, we develop a set of recommendations to improve interface design.
|
On the other hand, Fowler and Beck @cite_11 propose a set of bad smells in OO class design: . They also propose refactorings for improving code quality with respect to the type of code smell. Based on Fowler and Beck's definitions of class smells, several approaches to automatically improving code quality have been developed. Murno proposes an approach based on code smells to automatically identify design smells and where to apply refactorings in a Software application @cite_21 . Trifu and Marinescu establish a clear distinction between OO structural problems and code smells, and present a causal approach to restructuring OO applications @cite_10 . Liu al @cite_12 provide a deep analysis of the relationships among different kinds of bad smells and their influence on resolution sequences.
|
{
"cite_N": [
"@cite_21",
"@cite_10",
"@cite_12",
"@cite_11"
],
"mid": [
"2149963636",
"2532677196",
"2097987957",
"2153887189"
],
"abstract": [
"Refactoring can have a direct influence on reducing the cost of software maintenance through changing the internal structure of the source-code to improve the overall design that helps the present and future programmers evolve and understand a system. Bad smells are a set of design problems with refactoring identified as a solution. Locating these bad smells has been described as more a human intuition than an exact science. This paper addresses the issue of identifying the characteristics of a bad smell through the use of a set of software metrics. Then by using a pre-defined set of interpretation rules to interpret the software metric results applied to Java source-code, the software engineer can be provided with significant guidance as to the location of bad smells. These issues are addressed in a number of ways. Firstly, a precise definition of bad smells is given from the informal descriptions given by the originators Fowler and Beck. The characteristics of the bad smells have been used to define a set of measurements and interpretation rules for a subset of the bad smells. A prototype tool has been implemented to enable the evaluation of the interpretation rules in two case studies.",
"Software decay is a phenomenon that plagues aging software systems. While in recent years, there has been significant progress in the area of automatic detection of \"code smells\" on one hand, and code refactorings on the other hand, we claim that existing restructuring practices are seriously hampered by their symptomatic and informal (non-repeatable) nature. This paper makes a clear distinction between structural problems and structural symptoms (also known as code smells), and presents a novel, causal approach to restructuring object oriented systems. Our approach is based on two innovations: the encapsulation of correlations of symptoms and additional contextual information into higher-level design problems, and the univocal, explicit mapping of problems to unique refactoring solutions. Due to its explicit, repeatable nature, the approach shows high potential for increased levels of automation in the restructuring process, and consequently a decrease in maintenance costs.",
"Bad smells are signs of potential problems in code. Detecting and resolving bad smells, however, remain time-consuming for software engineers despite proposals on bad smell detection and refactoring tools. Numerous bad smells have been recognized, yet the sequences in which the detection and resolution of different kinds of bad smells are performed are rarely discussed because software engineers do not know how to optimize sequences or determine the benefits of an optimal sequence. To this end, we propose a detection and resolution sequence for different kinds of bad smells to simplify their detection and resolution. We highlight the necessity of managing bad smell resolution sequences with a motivating example, and recommend a suitable sequence for commonly occurring bad smells. We evaluate this recommendation on two nontrivial open source applications, and the evaluation results suggest that a significant reduction in effort ranging from 17.64 to 20 percent can be achieved when bad smells are detected and resolved using the proposed sequence.",
"Almost every expert in Object-Oriented Development stresses the importance of iterative development. As you proceed with the iterative development, you need to add function to the existing code base. If you are really lucky that code base is structured just right to support the new function while still preserving its design integrity. Of course most of the time we are not lucky, the code does not quite fit what we want to do. You could just add the function on top of the code base. But soon this leads to applying patch upon patch making your system more complex than it needs to be. This complexity leads to bugs, and cripples your productivity."
]
}
|
1309.7950
|
2103423338
|
Interfaces are recognized as an important mechanism to define contracts governing interactions between semi-independent software modules. Well-designed interfaces significantly reduce software complexity and ease maintainability by fostering modularization, hiding implementation details and minimizing the impact caused by changes in the software implementation. However, designing good interfaces is not a trivial task. The presence of interface design defects often yield increased development cost, lower code quality and reduced development productivity. Despite their importance, currently there are only a few research efforts that investigate the quality of interface design. In this paper, we identify and characterize common interface design anomalies and illustrate them via examples taken from well-known open source applications. In order to quantify the presence of interface design anomalies and estimate their impact on the interface design quality, as well on the software quality attributes, such as maintainability, we conduct an empirical study covering 9 open source projects. Building on our empirical results, we develop a set of recommendations to improve interface design.
|
Boxall and Araban define a set of primitive counter metrics to measure the complexity and usage of interfaces @cite_9 . Their metrics return the number of interface methods, all arguments of the interface methods, the interface client classes, The authors in @cite_16 define more complex metrics that assess the interface design quality with regard to existing similarities among software interfaces, and with regard to the redundancy in interfaces hierarchies.
|
{
"cite_N": [
"@cite_9",
"@cite_16"
],
"mid": [
"2101519029",
"182787266"
],
"abstract": [
"Component-based software development relies on reusable components in order to improve quality and flexibility of products as well as increasing development productivity. This paradigm promotes deployment of reusable components as black-box units that can only work and communicate with one another through their well defined interfaces. Here, understandability of component interfaces is considered as a major quality affecting reusability of software components. A set of metrics for measuring properties believed to be relevant to understandability and reusability of software components are presented. Then, their usefulness and relevance are analyzed based upon empirical data gathered from the measurement of a variety of component interfaces. We conclude with some ideas for further research in this area.",
"Recent studies have largely investigated the detection of class design anomalies. They proposed a large set of metrics that help in detecting those anomalies and in predicting the quality of class design. While those studies and the proposed metrics are valuable, they do not address the particularities of software interfaces. Interfaces define the contracts that spell out how software modules and logic units interact with each other. This paper proposes a list of design defects related to interfaces: shared similarity between interfaces, interface clones and redundancy in interface hierarchy. We identify and describe those design defects through real examples, taken from well-known Java applications. Then we define three metrics that help in automatically estimating the interface design quality, regarding the proposed design anomalies, and identify refactoring candidates. We investigate our metrics and show their usefulness through an empirical study conducted on three large Java applications."
]
}
|
1309.7950
|
2103423338
|
Interfaces are recognized as an important mechanism to define contracts governing interactions between semi-independent software modules. Well-designed interfaces significantly reduce software complexity and ease maintainability by fostering modularization, hiding implementation details and minimizing the impact caused by changes in the software implementation. However, designing good interfaces is not a trivial task. The presence of interface design defects often yield increased development cost, lower code quality and reduced development productivity. Despite their importance, currently there are only a few research efforts that investigate the quality of interface design. In this paper, we identify and characterize common interface design anomalies and illustrate them via examples taken from well-known open source applications. In order to quantify the presence of interface design anomalies and estimate their impact on the interface design quality, as well on the software quality attributes, such as maintainability, we conduct an empirical study covering 9 open source projects. Building on our empirical results, we develop a set of recommendations to improve interface design.
|
The authors in @cite_17 investigated the suitability of existing source code metrics to classify Software interfaces into change-prone and not change-prone. The metrics used in the study were Chidamber and Kemerer (CK) metrics, interface complexity and usage metrics and the IUC metric. They empirically evaluated their model for predicting change-prone Software interfaces by investigating the correlation between metrics and the number of changes in interfaces of ten Software open-source systems. The paper concluded that most of the CK metrics are not sound for interfaces and only perform well for predicting change-prone concrete and abstract classes. Therefore this confirms the claim that interfaces need to be treated separately. The IUC metric exhibits the strongest correlation with the number of interface changed. Hence IUC can improve the performance of prediction models for classifying Software interfaces into change-prone and not change-prone.
|
{
"cite_N": [
"@cite_17"
],
"mid": [
"2125121913"
],
"abstract": [
"Recent empirical studies have investigated the use of source code metrics to predict the change- and defect-proneness of source code files and classes. While results showed strong correlations and good predictive power of these metrics, they do not distinguish between interface, abstract or concrete classes. In particular, interfaces declare contracts that are meant to remain stable during the evolution of a software system while the implementation in concrete classes is more likely to change. This paper aims at investigating to which extent the existing source code metrics can be used for predicting change-prone Java interfaces. We empirically investigate the correlation between metrics and the number of fine-grained source code changes in interfaces of ten Java open-source systems. Then, we evaluate the metrics to calculate models for predicting change-prone Java interfaces. Our results show that the external interface cohesion metric exhibits the strongest correlation with the number of source code changes. This metric also improves the performance of prediction models to classify Java interfaces into change-prone and not change-prone."
]
}
|
1309.7319
|
2006979232
|
Abstract Let λ 1 , … , λ n denote the eigenvalues of a n × n matrix, ordered by nonincreasing absolute value, and let γ 1 ≥ ⋯ ≥ γ n denote the tropical eigenvalues of an associated n × n matrix, obtained by replacing every entry of the original matrix by its absolute value. We show that for all 1 ≤ k ≤ n , | λ 1 ⋯ λ k | ≤ C n , k γ 1 ⋯ γ k , where C n , k is a combinatorial constant depending only on k and on the pattern of the matrix. This generalizes an inequality by Friedland (1986), corresponding to the special case k = 1 .
|
The present inequalities generalize a theorem of Friedland, who showed in @cite_2 that for a nonnegative matrix @math , we have where @math is the of @math , defined to be [ (A) = i_1, ,i_ ( a_ i_1,i_2 a_ i_2,i_3 a_ i_ ,i_1 ) ^ 1 ; , ] the maximum being taken over all sequences @math of distinct elements of @math . Since @math , the second inequality in corresponds to the case @math in .
|
{
"cite_N": [
"@cite_2"
],
"mid": [
"2025314573"
],
"abstract": [
"Abstract In this paper we give a simple closed formula for a certain limit eigenvalue of a nonnegative matrix. We bound the spectral radius of a nonnegative matrix in terms of this limit eigenvalue."
]
}
|
1309.7319
|
2006979232
|
Abstract Let λ 1 , … , λ n denote the eigenvalues of a n × n matrix, ordered by nonincreasing absolute value, and let γ 1 ≥ ⋯ ≥ γ n denote the tropical eigenvalues of an associated n × n matrix, obtained by replacing every entry of the original matrix by its absolute value. We show that for all 1 ≤ k ≤ n , | λ 1 ⋯ λ k | ≤ C n , k γ 1 ⋯ γ k , where C n , k is a combinatorial constant depending only on k and on the pattern of the matrix. This generalizes an inequality by Friedland (1986), corresponding to the special case k = 1 .
|
When considering a field with an archimedean absolute value, like the field of complex numbers equipped with its usual absolute value, the amoeba of a hypersurface does not coincide any longer with a tropical hypersurface, however, it can be approximated by such a hypersurface, called spine, in particular, Passare and Rullg rd @cite_20 showed that the latter is a deformation retract of the former. In a recent work, Avenda n o, Kogan, Nisse and Rojas @cite_23 gave estimates of the distance between a tropical hypersurface which is a more easily computable variant of the spine, and the amoeba of a original hypersurface. However, it does not seem that the present bounds could be derived by the same method.
|
{
"cite_N": [
"@cite_23",
"@cite_20"
],
"mid": [
"2962767068",
"2094372703"
],
"abstract": [
"Abstract Given any complex Laurent polynomial f , Amoeba ( f ) is the image of its complex zero set under the coordinate-wise log absolute value map. We discuss an efficiently constructible polyhedral approximation, ArchTrop ( f ) , of Amoeba ( f ) , and derive explicit upper and lower bounds, solely as a function of the number of monomial terms of f , for the Hausdorff distance between these two sets. We also show that deciding whether a given point lies in ArchTrop ( f ) is doable in polynomial-time, for any fixed dimension, unlike the corresponding problem for Amoeba ( f ) , which is NP -hard already in one variable. ArchTrop ( f ) can thus serve as a canonical low order approximation to start a higher order iterative polynomial system solving algorithm, e.g., homotopy continuation.",
"The amoeba of a holomorphic function @math is, by definition, the image in @math of the zero locus of @math under the simple mapping that takes each coordinate to the logarithm of its modulus. The terminology was introduced in the 1990s by the famous (biologist and) mathematician Israel Gelfand and his coauthors Kapranov and Zelevinsky (GKZ). In this paper we study a natural convex potential function @math with the property that its Monge-Ampere mass is concentrated to the amoeba of @math We obtain results of two kinds; by approximating @math with a piecewise linear function, we get striking combinatorial information regarding the amoeba and the Newton polytope of @math ; by computing the Monge-Ampere measure, we find sharp bounds for the area of amoebas in @math . We also consider systems of functions @math and prove a local version of the classical Bernstein theorem on the number of roots of systems of algebraic equations."
]
}
|
1309.7319
|
2006979232
|
Abstract Let λ 1 , … , λ n denote the eigenvalues of a n × n matrix, ordered by nonincreasing absolute value, and let γ 1 ≥ ⋯ ≥ γ n denote the tropical eigenvalues of an associated n × n matrix, obtained by replacing every entry of the original matrix by its absolute value. We show that for all 1 ≤ k ≤ n , | λ 1 ⋯ λ k | ≤ C n , k γ 1 ⋯ γ k , where C n , k is a combinatorial constant depending only on k and on the pattern of the matrix. This generalizes an inequality by Friedland (1986), corresponding to the special case k = 1 .
|
We note that a different generalization of the Hadamard-Ostrowski-P 'olya theorem, dealing with the case of matrix polynomials, not relying on tropical eigenvalues, but thinking of the norm as a valuation'', appeared recently in @cite_4 , refining a result of @cite_24 . Tropical eigenvalues generally lead to tighter estimates in the case of structured or sparse matrices.
|
{
"cite_N": [
"@cite_24",
"@cite_4"
],
"mid": [
"1856442002",
"1818090810"
],
"abstract": [
"The eigenvalues of a matrix polynomial can be determined classically by solving a generalized eigenproblem for a linearized matrix pencil, for instance by writing the matrix polynomial in companion form. We introduce a general scaling technique, based on tropical algebra, which applies in particular to this companion form. This scaling, which is inspired by an earlier work of Akian, Bapat, and Gaubert, relies on the computation of “tropical roots”. We give explicit bounds, in a typical case, indicating that these roots provide accurate estimates of the order of magnitude of the different eigenvalues, and we show by experiments that this scaling improves the accuracy (measured by normwise backward error) of the computations, particularly in situations in which the data have various orders of magnitude. In the case of quadratic polynomial matrices, we recover in this way a scaling due to Fan, Lin, and Van Dooren, which coincides with the tropical scaling when the two tropical roots are equal. If not, the eigenvalues generally split in two groups, and the tropical method leads to making one specific scaling for each of the groups.",
"We show that the sequence of moduli of the eigenvalues of a matrix polynomial is log-majorized, up to universal constants, by a sequence of \"tropical roots\" depending only on the norms of the matrix coefficients. These tropical roots are the non-differentiability points of an auxiliary tropical polynomial, or equivalently, the opposites of the slopes of its Newton polygon. This extends to the case of matrix polynomials some bounds obtained by Hadamard, Ostrowski and PA³lya for the roots of scalar polynomials. We also obtain new bounds in the scalar case, which are accurate for \"fewnomials\" or when the tropical roots are well separated."
]
}
|
1309.7512
|
2951531713
|
Submodular functions can be exactly minimized in polynomial time, and the special case that graph cuts solve with max flow KZ:PAMI04 has had significant impact in computer vision BVZ:PAMI01,Kwatra:SIGGRAPH03,Rother:GrabCut04 . In this paper we address the important class of sum-of-submodular (SoS) functions Arora:ECCV12,Kolmogorov:DAM12 , which can be efficiently minimized via a variant of max flow called submodular flow Edmonds:ADM77 . SoS functions can naturally express higher order priors involving, e.g., local image patches; however, it is difficult to fully exploit their expressive power because they have so many parameters. Rather than trying to formulate existing higher order priors as an SoS function, we take a discriminative learning approach, effectively searching the space of SoS functions for a higher order prior that performs well on our training set. We adopt a structural SVM approach Joachims et al 09a,Tsochantaridis et al 04 and formulate the training problem in terms of quadratic programming; as a result we can efficiently search the space of SoS priors via an extended cutting-plane algorithm. We also show how the state-of-the-art max flow method for vision problems Goldberg:ESA11 can be modified to efficiently solve the submodular flow problem. Experimental comparisons are made against the OpenCV implementation of the GrabCut interactive segmentation technique Rother:GrabCut04 , which uses hand-tuned parameters instead of machine learning. On a standard dataset Gulshan:CVPR10 our method learns higher order priors with hundreds of parameter values, and produces significantly better segmentations. While our focus is on binary labeling problems, we show that our techniques can be naturally generalized to handle more than two labels.
|
Many learning problems in computer vision can be cast as structured output prediction, which allows learning outputs with spatial coherence. Among the most popular generic methods for structured output learning are Conditional Random Fields (CRFs) trained by maximum conditional likelihood @cite_23 , Maximum-Margin Markov Networks (M3N) @cite_9 , and Structural Support Vector Machines (SVM-struct) @cite_14 @cite_29 . A key advantage of M3N and SVM-struct over CRFs is that training does not require computation of the partition function. Among the two large-margin approaches M3N and SVM-struct, we follow the SVM-struct methodology since it allows the use of efficient inference procedures during training.
|
{
"cite_N": [
"@cite_29",
"@cite_9",
"@cite_14",
"@cite_23"
],
"mid": [
"2031248101",
"",
"2429914308",
"2147880316"
],
"abstract": [
"Discriminative training approaches like structural SVMs have shown much promise for building highly complex and accurate models in areas like natural language processing, protein structure prediction, and information retrieval. However, current training algorithms are computationally expensive or intractable on large datasets. To overcome this bottleneck, this paper explores how cutting-plane methods can provide fast training not only for classification SVMs, but also for structural SVMs. We show that for an equivalent \"1-slack\" reformulation of the linear SVM training problem, our cutting-plane method has time complexity linear in the number of training examples. In particular, the number of iterations does not depend on the number of training examples, and it is linear in the desired precision and the regularization parameter. Furthermore, we present an extensive empirical evaluation of the method applied to binary classification, multi-class classification, HMM sequence tagging, and CFG parsing. The experiments show that the cutting-plane algorithm is broadly applicable and fast in practice. On large datasets, it is typically several orders of magnitude faster than conventional training methods derived from decomposition methods like SVM-light, or conventional cutting-plane methods. Implementations of our methods are available at www.joachims.org .",
"",
"Learning general functional dependencies is one of the main goals in machine learning. Recent progress in kernel-based methods has focused on designing flexible and powerful input representations. This paper addresses the complementary issue of problems involving complex outputs such as multiple dependent output variables and structured output spaces. We propose to generalize multiclass Support Vector Machine learning in a formulation that involves features extracted jointly from inputs and outputs. The resulting optimization problem is solved efficiently by a cutting plane algorithm that exploits the sparseness and structural decomposition of the problem. We demonstrate the versatility and effectiveness of our method on problems ranging from supervised grammar learning and named-entity recognition, to taxonomic text classification and sequence alignment.",
"We present conditional random fields , a framework for building probabilistic models to segment and label sequence data. Conditional random fields offer several advantages over hidden Markov models and stochastic grammars for such tasks, including the ability to relax strong independence assumptions made in those models. Conditional random fields also avoid a fundamental limitation of maximum entropy Markov models (MEMMs) and other discriminative Markov models based on directed graphical models, which can be biased towards states with few successor states. We present iterative parameter estimation algorithms for conditional random fields and compare the performance of the resulting models to HMMs and MEMMs on synthetic and natural-language data."
]
}
|
1309.7512
|
2951531713
|
Submodular functions can be exactly minimized in polynomial time, and the special case that graph cuts solve with max flow KZ:PAMI04 has had significant impact in computer vision BVZ:PAMI01,Kwatra:SIGGRAPH03,Rother:GrabCut04 . In this paper we address the important class of sum-of-submodular (SoS) functions Arora:ECCV12,Kolmogorov:DAM12 , which can be efficiently minimized via a variant of max flow called submodular flow Edmonds:ADM77 . SoS functions can naturally express higher order priors involving, e.g., local image patches; however, it is difficult to fully exploit their expressive power because they have so many parameters. Rather than trying to formulate existing higher order priors as an SoS function, we take a discriminative learning approach, effectively searching the space of SoS functions for a higher order prior that performs well on our training set. We adopt a structural SVM approach Joachims et al 09a,Tsochantaridis et al 04 and formulate the training problem in terms of quadratic programming; as a result we can efficiently search the space of SoS priors via an extended cutting-plane algorithm. We also show how the state-of-the-art max flow method for vision problems Goldberg:ESA11 can be modified to efficiently solve the submodular flow problem. Experimental comparisons are made against the OpenCV implementation of the GrabCut interactive segmentation technique Rother:GrabCut04 , which uses hand-tuned parameters instead of machine learning. On a standard dataset Gulshan:CVPR10 our method learns higher order priors with hundreds of parameter values, and produces significantly better segmentations. While our focus is on binary labeling problems, we show that our techniques can be naturally generalized to handle more than two labels.
|
In this paper, we will learn submodular discriminant functions. Prior work on learning submodular functions falls into three categories: submodular function regression @cite_11 , maximization of submodular discriminant functions, and minimization of submodular discriminant functions.
|
{
"cite_N": [
"@cite_11"
],
"mid": [
"340337472"
],
"abstract": [
"Submodular functions are discrete functions that model laws of diminishing returns and enjoy numerous algorithmic applications that have been used in many areas, including combinatorial optimization, machine learning, and economics. In this work we use a learning theoretic angle for studying submodular functions. We provide algorithms for learning submodular functions, as well as lower bounds on their learnability. In doing so, we uncover several novel structural results revealing both extremal properties as well as regularities of submodular functions, of interest to many areas."
]
}
|
1309.7512
|
2951531713
|
Submodular functions can be exactly minimized in polynomial time, and the special case that graph cuts solve with max flow KZ:PAMI04 has had significant impact in computer vision BVZ:PAMI01,Kwatra:SIGGRAPH03,Rother:GrabCut04 . In this paper we address the important class of sum-of-submodular (SoS) functions Arora:ECCV12,Kolmogorov:DAM12 , which can be efficiently minimized via a variant of max flow called submodular flow Edmonds:ADM77 . SoS functions can naturally express higher order priors involving, e.g., local image patches; however, it is difficult to fully exploit their expressive power because they have so many parameters. Rather than trying to formulate existing higher order priors as an SoS function, we take a discriminative learning approach, effectively searching the space of SoS functions for a higher order prior that performs well on our training set. We adopt a structural SVM approach Joachims et al 09a,Tsochantaridis et al 04 and formulate the training problem in terms of quadratic programming; as a result we can efficiently search the space of SoS priors via an extended cutting-plane algorithm. We also show how the state-of-the-art max flow method for vision problems Goldberg:ESA11 can be modified to efficiently solve the submodular flow problem. Experimental comparisons are made against the OpenCV implementation of the GrabCut interactive segmentation technique Rother:GrabCut04 , which uses hand-tuned parameters instead of machine learning. On a standard dataset Gulshan:CVPR10 our method learns higher order priors with hundreds of parameter values, and produces significantly better segmentations. While our focus is on binary labeling problems, we show that our techniques can be naturally generalized to handle more than two labels.
|
Learning of submodular discriminant functions where a prediction is computed through maximization has widespread use in information retrieval, where submodularity models diversity in the ranking of a search engine @cite_5 @cite_15 or in an automatically generated abstract @cite_19 . While exact (monotone) submodular maximization is intractible, approximate inference using a simple greedy algorithm has approximation guarantees and generally excellent performance in practice.
|
{
"cite_N": [
"@cite_19",
"@cite_5",
"@cite_15"
],
"mid": [
"2953050252",
"2104895009",
"1492009297"
],
"abstract": [
"In this paper, we present a supervised learning approach to training submodular scoring functions for extractive multi-document summarization. By taking a structured predicition approach, we provide a large-margin method that directly optimizes a convex relaxation of the desired performance measure. The learning method applies to all submodular summarization methods, and we demonstrate its effectiveness for both pairwise as well as coverage-based scoring functions on multiple datasets. Compared to state-of-the-art functions that were tuned manually, our method significantly improves performance and enables high-fidelity models with numbers of parameters well beyond what could reasonbly be tuned by hand.",
"In many retrieval tasks, one important goal involves retrieving a diverse set of results (e.g., documents covering a wide range of topics for a search query). First of all, this reduces redundancy, effectively showing more information with the presented results. Secondly, queries are often ambiguous at some level. For example, the query \"Jaguar\" can refer to many different topics (such as the car or feline). A set of documents with high topic diversity ensures that fewer users abandon the query because no results are relevant to them. Unlike existing approaches to learning retrieval functions, we present a method that explicitly trains to diversify results. In particular, we formulate the learning problem of predicting diverse subsets and derive a training method based on structural SVMs.",
"We introduce a method to learn a mixture of submodular \"shells\" in a large-margin setting. A submodular shell is an abstract submodular function that can be instantiated with a ground set and a set of parameters to produce a submodular function. A mixture of such shells can then also be so instantiated to produce a more complex submodular function. What our algorithm learns are the mixture weights over such shells. We provide a risk bound guarantee when learning in a large-margin structured-prediction setting using a projected subgradient method when only approximate submodular optimization is possible (such as with submodular function maximization). We apply this method to the problem of multi-document summarization and produce the best results reported so far on the widely used NIST DUC-05 through DUC-07 document summarization corpora."
]
}
|
1309.7640
|
2131030647
|
Many electronic content providers today like Flickr and Google, offer space to users to publish their electronic media(e.g. photos and videos) in their cloud infrastructures so that they can be publicly accessed. Features like including other information, such as keywords or owner information into the digital material is already offered by existing providers. Despite the useful features made available to users by such infrastructures, the authorship of the published content is not protected against various attacks such as compression. In this paper we propose a robust scheme that uses digital invisible watermarking and hashing to protect the authorship of the digital content and provide resistance against malicious manipulation of multimedia content. The scheme is enhanced by an algorithm called MMBEC, that is an extension of an established scheme MBEC towards higher resistance.
|
The work by Herrigel et at. in @cite_8 is a characteristic example of watemarking technique for images, which belongs to the first category of approaches. Their technique provides robustness against various attacks with the advantage of not needing the original cover-image for the watermark detection. Contrary to our scheme, their method uses public-key cryptography and requires PKI infrastructure for supporting the distribution of public keys between the involved parties for mutual authentication. PKI infrastructure has also been found necessary to other solutions @cite_15 , used for video watermarking. Our approach has the advantage of neither needing the cover image to be provided in the verification, avoiding any exposure to other parties, nor does it need a PKI to be set up.
|
{
"cite_N": [
"@cite_15",
"@cite_8"
],
"mid": [
"2115677140",
"1849530807"
],
"abstract": [
"In this paper, we describe a configurable content-based MPEG video authentication scheme, which is robust to typical video transcoding approaches, namely frame resizing, frame dropping and requantization. By exploiting the synergy between cryptographic signature, forward error correction (FEC) and digital watermarking, the generated content-based message authentication code (MAC or keyed crypto hash) is embedded back into the video to reduce the transmission cost. The proposed scheme is secure against malicious attacks such as video frame insertion and alteration. System robustness and security are balanced in a configurable way (i.e., more robust the system is, less secure the system will be). Compressed-domain process makes the scheme computationally efficient. Furthermore, the proposed scheme is compliant with state-of-the-art public key infrastructure. Experimental results demonstrate the validity of the proposed scheme",
"This paper presents a new approach for the secure and robust copyright protection of digital images. A system for generating digital watermarks and for trading watermarked images is described. The system is based on a new watermarking technique, which is robust against image transformation techniques such as compression, rotation, translation, scaling and cropping. It uses modulation of the magnitude components in Fourier space to embed a watermark and an accompanying template and, during watermark extraction, reads a template in the log polar transform of the frequency domain. The template is used for analyzing scaling and rotation suffered by the watermarked stego-image. The detection of the watermarks is also possible without any need for the original cover-image. In addition, the system applies asymmetric cryptographic protocols for different purposes, namely embedding detecting the watermark and transferring watermarked data. The public key technique is applied for the construction of a one-way watermark embedding and the verification function to identify and prove the uniqueness of the watermark. Legal dispute resolution is supported for the multiple watermarking of a digital image without revealing the confidential keying information."
]
}
|
1309.7640
|
2131030647
|
Many electronic content providers today like Flickr and Google, offer space to users to publish their electronic media(e.g. photos and videos) in their cloud infrastructures so that they can be publicly accessed. Features like including other information, such as keywords or owner information into the digital material is already offered by existing providers. Despite the useful features made available to users by such infrastructures, the authorship of the published content is not protected against various attacks such as compression. In this paper we propose a robust scheme that uses digital invisible watermarking and hashing to protect the authorship of the digital content and provide resistance against malicious manipulation of multimedia content. The scheme is enhanced by an algorithm called MMBEC, that is an extension of an established scheme MBEC towards higher resistance.
|
It is known that watermarking-based approaches that make use of must involve a TTP that take part in the embedding phase. Such solution has been reported to have potential drawbacks @cite_12 . In @cite_1 , propose an approach based on the use of . Their protocol allows a prover to convince a verifier of the presence of a watermark without revealing any information that the verifier could use to remove the watermark, something known as . Nevertheless, their protocols still require the existence of another arbitrary Third party along with a Registration Center entity. On the contrary, our approach, having the advantages of , it uses a much simpler but efficient protocol that employs one-way functions for achieving the same result.
|
{
"cite_N": [
"@cite_1",
"@cite_12"
],
"mid": [
"1810933997",
"1501096649"
],
"abstract": [
"The goal of zero-knowledge watermark detection is to allow a prover to soundly convince a verifier of the presence of a watermark in certain stego-data without revealing any information which the verifier can use to remove the watermark. Existing proposals do not achieve this goal in terms of definition (not formally zero-knowledge), security (unproven) and coverage (handle only blind watermark detection).In this paper we define zero-knowledge watermark detection precisely. We then propose efficient and provably secure zero-knowledge protocols for blind and non-blind versions of a well-known class of watermarking schemes. Using these protocols the security and efficiency of many watermark based applications can be significantly improved.As an example of use we propose concrete protocols for direct proof of ownership which enable offline ownership proofs, i.e., copyright holders can prove their rightful ownership to anyone without involving a trusted third party in the actual proof protocol.",
"Recently, two generic watermarking protocols were proposed, based on a popular zero-knowledge identification protocol. In this paper, we show that both protocols are flawed and therefore fail to achieve their purpose of allowing a prover to prove to a verifier of his ownership of a watermarked image. We also give some suggestions to fix these flaws."
]
}
|
1309.7472
|
2260540739
|
A comprehensive framework for detection and characterization of overlapping intrinsic symmetry over 3D shapes is proposed. To identify prominent symmetric regions which overlap in space and vary in form, the proposed framework is decoupled into a Correspondence Space Voting procedure followed by a Transformation Space Mapping procedure. In the correspondence space voting procedure, significant symmetries are first detected by identifying surface point pairs on the input shape that exhibit local similarity in terms of their intrinsic geometry while simultaneously maintaining an intrinsic distance structure at a global level. Since different point pairs can share a common point, the detected symmetric shape regions can potentially overlap. To this end, a global intrinsic distance-based voting technique is employed to ensure the inclusion of only those point pairs that exhibit significant symmetry. In the transformation space mapping procedure, the Functional Map framework is employed to generate the final map of symmetries between point pairs. The transformation space mapping procedure ensures the retrieval of the underlying dense correspondence map throughout the 3D shape that follows a particular symmetry. Additionally, the formulation of a novel cost matrix enables the inner product to succesfully indicate the complexity of the underlying symmetry transformation. The proposed transformation space mapping procedure is shown to result in the formulation of a semi-metric symmetry space where each point in the space represents a specific symmetry transformation and the distance between points represents the complexity between the corresponding transformations. Experimental results show that the proposed framework can successfully process complex 3D shapes that possess rich symmetries.
|
The research literature on symmetry detection has grown substantially in recent years as shown in the excellent survey by Mitra et. al. @cite_11 . In this paper, we do not attempt to provide an exhaustive exposition of the state of the art in symmetry detection; rather we focus on discussing existing works that are most closely related to our proposed approach.
|
{
"cite_N": [
"@cite_11"
],
"mid": [
"1580065766"
],
"abstract": [
"The concept of symmetry has received significant attention in computer graphics and computer vision research in recent years. Numerous methods have been proposed to find, extract, encode and exploit geometric symmetries and high-level structural information for a wide variety of geometry processing tasks. This report surveys and classifies recent developments in symmetry detection. We focus on elucidating the key similarities and differences between existing methods to gain a better understanding of a fundamental problem in digital geometry processing and shape understanding in general. We discuss a variety of applications in computer graphics and geometry processing that benefit from symmetry information for more effective processing. An analysis of the strengths and limitations of existing algorithms highlights the plenitude of opportunities for future research both in terms of theory and applications."
]
}
|
1309.7472
|
2260540739
|
A comprehensive framework for detection and characterization of overlapping intrinsic symmetry over 3D shapes is proposed. To identify prominent symmetric regions which overlap in space and vary in form, the proposed framework is decoupled into a Correspondence Space Voting procedure followed by a Transformation Space Mapping procedure. In the correspondence space voting procedure, significant symmetries are first detected by identifying surface point pairs on the input shape that exhibit local similarity in terms of their intrinsic geometry while simultaneously maintaining an intrinsic distance structure at a global level. Since different point pairs can share a common point, the detected symmetric shape regions can potentially overlap. To this end, a global intrinsic distance-based voting technique is employed to ensure the inclusion of only those point pairs that exhibit significant symmetry. In the transformation space mapping procedure, the Functional Map framework is employed to generate the final map of symmetries between point pairs. The transformation space mapping procedure ensures the retrieval of the underlying dense correspondence map throughout the 3D shape that follows a particular symmetry. Additionally, the formulation of a novel cost matrix enables the inner product to succesfully indicate the complexity of the underlying symmetry transformation. The proposed transformation space mapping procedure is shown to result in the formulation of a semi-metric symmetry space where each point in the space represents a specific symmetry transformation and the distance between points represents the complexity between the corresponding transformations. Experimental results show that the proposed framework can successfully process complex 3D shapes that possess rich symmetries.
|
Several recent approaches to detect approximate and partial extrinsic symmetries have focused on algorithms that cluster votes for symmetries in a parameterized “transformation space” @cite_24 ; @cite_30 ; @cite_31 ; @cite_19 ]. @cite_30 generate “votes” in a transformation space to align pairs of similar points and then cluster them in a space spanned by the parameters of the potential symmetry transformations. Regardless of how good the shape descriptors are, the aforementioned methods are not effective at finding correspondences between points in complex symmetry orbits that are spread across multiple distinct clusters in the transformation space. Since the dimensionality of the transformation space increases with the complexity of the symmetry, the voting procedure in transformation space becomes increasing intractable when dealing with complex symmetries.
|
{
"cite_N": [
"@cite_24",
"@cite_19",
"@cite_31",
"@cite_30"
],
"mid": [
"2097441399",
"",
"1972062985",
"2060206980"
],
"abstract": [
"We propose a randomized method for the detection of symmetry in polyhedra without assuming the predetermination of the centroids of the objects. Using a voting process, which is the main concept of the Hough transform in image processing, we transform the geometric computation for symmetry detection based on graph theory, to the peak detection problem in a voting space in the context of the Hough transform.",
"",
"In this paper, a simple and yet robust Hough transform algorithm is proposed to detect and analyze reflectional symmetry and skew-symmetry (reflectional symmetry under parallel projection). It is applicable to shapes that contain global, local and slightly deformed reflectional skew-symmetries under the presence of noise and occlusion.",
"\"Symmetry is a complexity-reducing concept [...]; seek it every-where.\" - Alan J. PerlisMany natural and man-made objects exhibit significant symmetries or contain repeated substructures. This paper presents a new algorithm that processes geometric models and efficiently discovers and extracts a compact representation of their Euclidean symmetries. These symmetries can be partial, approximate, or both. The method is based on matching simple local shape signatures in pairs and using these matches to accumulate evidence for symmetries in an appropriate transformation space. A clustering stage extracts potential significant symmetries of the object, followed by a verification step. Based on a statistical sampling analysis, we provide theoretical guarantees on the success rate of our algorithm. The extracted symmetry graph representation captures important high-level information about the structure of a geometric model which in turn enables a large set of further processing operations, including shape compression, segmentation, consistent editing, symmetrization, indexing for retrieval, etc."
]
}
|
1309.7472
|
2260540739
|
A comprehensive framework for detection and characterization of overlapping intrinsic symmetry over 3D shapes is proposed. To identify prominent symmetric regions which overlap in space and vary in form, the proposed framework is decoupled into a Correspondence Space Voting procedure followed by a Transformation Space Mapping procedure. In the correspondence space voting procedure, significant symmetries are first detected by identifying surface point pairs on the input shape that exhibit local similarity in terms of their intrinsic geometry while simultaneously maintaining an intrinsic distance structure at a global level. Since different point pairs can share a common point, the detected symmetric shape regions can potentially overlap. To this end, a global intrinsic distance-based voting technique is employed to ensure the inclusion of only those point pairs that exhibit significant symmetry. In the transformation space mapping procedure, the Functional Map framework is employed to generate the final map of symmetries between point pairs. The transformation space mapping procedure ensures the retrieval of the underlying dense correspondence map throughout the 3D shape that follows a particular symmetry. Additionally, the formulation of a novel cost matrix enables the inner product to succesfully indicate the complexity of the underlying symmetry transformation. The proposed transformation space mapping procedure is shown to result in the formulation of a semi-metric symmetry space where each point in the space represents a specific symmetry transformation and the distance between points represents the complexity between the corresponding transformations. Experimental results show that the proposed framework can successfully process complex 3D shapes that possess rich symmetries.
|
There exists a body of published research literature that characterizes shape representations based on the extent of symmetry displayed by an object with respect to multiple transformations. @cite_8 have proposed an extension of Zabrodsky’s symmetry distance to characteristic functions, resulting in a symmetry descriptor that measures the symmetries of an object with respect to all planes and rotations through its center of mass. @cite_21 have extended the symmetry descriptor to define a planar reflective symmetry transform (PRST) that measures reflectional symmetries with respect to all planes through space. @cite_15 have extended the PRST to consider surface point-pair correlations at multiple radii. Although the above representations provide a measure of symmetry for a regularly sampled set of transformations within a group, they are practical only for transformation groups of low dimensionality (for example, rigid body transformations would require one to store a six-dimensional matrix) and break down when faced with groups of higher dimensionality.
|
{
"cite_N": [
"@cite_15",
"@cite_21",
"@cite_8"
],
"mid": [
"2013844129",
"2023507039",
"2045598453"
],
"abstract": [
"Symmetry has been playing an increasing role in 3D shape processing. Recently introduced planar reflective symmetry transform (PRST) has been found useful for canonical coordinate frame determination, shape matching, retrieval, and segmentation. Guided by the intuition that every imperfect symmetry is imperfect in its own way, we investigate the possibility of incorporating more information into symmetry transforms like PRST. As a step in this direction, the concept of augmented symmetry transform is introduced; we obtain a family of symmetry transforms indexed by a parameter. While the original PRST measures how much the symmetry is broken, the augmented PRST also gives some information about how it is broken. Several approaches to calculating the augmented transform are described. We demonstrate that the augmented transform is beneficial for shape retrieval.",
"Symmetry is an important cue for many applications, including object alignment, recognition, and segmentation. In this paper, we describe a planar reflective symmetry transform (PRST) that captures a continuous measure of the reflectional symmetry of a shape with respect to all possible planes. This transform combines and extends previous work that has focused on global symmetries with respect to the center of mass in 3D meshes and local symmetries with respect to points in 2D images. We provide an efficient Monte Carlo sampling algorithm for computing the transform for surfaces and show that it is stable under common transformations. We also provide an iterative refinement algorithm to find local maxima of the transform precisely. We use the transform to define two new geometric properties, center of symmetry and principal symmetry axes, and show that they are useful for aligning objects in a canonical coordinate system. Finally, we demonstrate that the symmetry transform is useful for several applications in computer graphics, including shape matching, segmentation of meshes into parts, and automatic viewpoint selection.",
"Computing reflective symmetries of 2D and 3D shapes is a classical problem in computer vision and computational geometry. Most prior work has focused on finding the main axes of symmetry, or determining that none exists. In this paper we introduce a new reflective symmetry descriptor that represents a measure of reflective symmetry for an arbitrary 3D model for all planes through the model’s center of mass (even if they are not planes of symmetry). The main benefits of this new shape descriptor are that it is defined over a canonical parameterization (the sphere) and describes global properties of a 3D shape. We show how to obtain a voxel grid from arbitrary 3D shapes and, using Fourier methods, we present an algorithm computes the symmetry descriptor in O(N4 log N) time for an N × N × N voxel grid and computes a multiresolution approximation in O(N3 log N) time. In our initial experiments, we have found that the symmetry descriptor is insensitive to noise and stable under point sampling. We have also found that it performs well in shape matching tasks, providing a measure of shape similarity that is orthogonal to existing methods."
]
}
|
1309.7472
|
2260540739
|
A comprehensive framework for detection and characterization of overlapping intrinsic symmetry over 3D shapes is proposed. To identify prominent symmetric regions which overlap in space and vary in form, the proposed framework is decoupled into a Correspondence Space Voting procedure followed by a Transformation Space Mapping procedure. In the correspondence space voting procedure, significant symmetries are first detected by identifying surface point pairs on the input shape that exhibit local similarity in terms of their intrinsic geometry while simultaneously maintaining an intrinsic distance structure at a global level. Since different point pairs can share a common point, the detected symmetric shape regions can potentially overlap. To this end, a global intrinsic distance-based voting technique is employed to ensure the inclusion of only those point pairs that exhibit significant symmetry. In the transformation space mapping procedure, the Functional Map framework is employed to generate the final map of symmetries between point pairs. The transformation space mapping procedure ensures the retrieval of the underlying dense correspondence map throughout the 3D shape that follows a particular symmetry. Additionally, the formulation of a novel cost matrix enables the inner product to succesfully indicate the complexity of the underlying symmetry transformation. The proposed transformation space mapping procedure is shown to result in the formulation of a semi-metric symmetry space where each point in the space represents a specific symmetry transformation and the distance between points represents the complexity between the corresponding transformations. Experimental results show that the proposed framework can successfully process complex 3D shapes that possess rich symmetries.
|
The exists a class of techniques that exploits the redundancy in repeating structures to robustly detect symmetries @cite_13 ; @cite_27 ; @cite_1 ; @cite_7 ; @cite_9 . The transformation space voting method proposed by @cite_30 is extended in @cite_7 by fitting parameters of a transformation generator to optimally register the clusters in transformation space. @cite_5 and @cite_13 have taken a similar approach using subgraph matching of feature points and feature lines, respectively, to establish potential correspondences between repeated structures. This is followed by an iterative closest points (ICP) algorithm to simultaneously grow corresponding regions and refine matches over all detected patterns, allowing the detection of repeated patterns even in noisy data @cite_7 , but at the cost of requiring a-priori knowledge of the commutative group expected in the data. Also, the non-linear local optimization procedure within the ICP algorithm could cause it to get trapped in a local minimum if the initialization is not good enough.
|
{
"cite_N": [
"@cite_30",
"@cite_7",
"@cite_9",
"@cite_1",
"@cite_27",
"@cite_5",
"@cite_13"
],
"mid": [
"2060206980",
"2086984226",
"2162973272",
"1694042583",
"2161883903",
"2124574189",
"1583417382"
],
"abstract": [
"\"Symmetry is a complexity-reducing concept [...]; seek it every-where.\" - Alan J. PerlisMany natural and man-made objects exhibit significant symmetries or contain repeated substructures. This paper presents a new algorithm that processes geometric models and efficiently discovers and extracts a compact representation of their Euclidean symmetries. These symmetries can be partial, approximate, or both. The method is based on matching simple local shape signatures in pairs and using these matches to accumulate evidence for symmetries in an appropriate transformation space. A clustering stage extracts potential significant symmetries of the object, followed by a verification step. Based on a statistical sampling analysis, we provide theoretical guarantees on the success rate of our algorithm. The extracted symmetry graph representation captures important high-level information about the structure of a geometric model which in turn enables a large set of further processing operations, including shape compression, segmentation, consistent editing, symmetrization, indexing for retrieval, etc.",
"We introduce a computational framework for discovering regular or repeated geometric structures in 3D shapes. We describe and classify possible regular structures and present an effective algorithm for detecting such repeated geometric patterns in point- or meshbased models. Our method assumes no prior knowledge of the geometry or spatial location of the individual elements that define the pattern. Structure discovery is made possible by a careful analysis of pairwise similarity transformations that reveals prominent lattice structures in a suitable model of transformation space. We introduce an optimization method for detecting such uniform grids specifically designed to deal with outliers and missing elements. This yields a robust algorithm that successfully discovers complex regular structures amidst clutter, noise, and missing geometry. The accuracy of the extracted generating transformations is further improved using a novel simultaneous registration method in the spatial domain. We demonstrate the effectiveness of our algorithm on a variety of examples and show applications to compression, model repair, and geometry synthesis.",
"In this paper, we present a new geometry compression technique particularly suitable for 3D mesh models of engineering class ‐ architectural models, machine plants, factories, etc. We observe that such models have a number of repeating features at various levels of granularity. In most of the widely available models in this class, the geometric description of such features are also repeated. A major distinctive aspect of our compression technique is that repeating geometric features in the geometric model are automatically discovered and then compactly encoded. The proposed method discovers repetition first at the connected component level and then at the subcomponent level across components and also at the aggregate component level. The results from a straight forward implementation tried on large mesh models downloaded from the net are extremely encouraging.",
"Decorative reliefs are widely used for e.g. packaging and porcelain design. In periodic reliefs, the relief repeats a pattern, for example all the way around an underlying surface of revolution. Reverseengineering of existing reliefs allows them to be re-applied to different base surfaces; we show here how to segment a single repeat unit of a periodic relief starting from a scanned triangle mesh. We first briefly review how we segment the relief from the background surface using our previous work. The rest of the paper then concentrates on how we extract a single repeat unit from the relief. To do so, the user provides two points on one relief boundary which are in approximate correspondence on consecutive repeats of the relief. We first refine the relative locations of these points, and then determine a third corresponding point using relief boundary information. These are used to determine three initial cutting planes across the relief. Then surface registration strategies are utilised to refine the correspondence between adjacent repeat units. Finally, we refine the exact locations of the cutting planes by considering only surface information close to the cutting planes. This allows a repeat unit of the periodic relief to be extracted. We demonstrate that our algorithm is successful and practical, using various real scanned models: user input can be quite imprecise, and we can cope with hand-made reliefs in which the pattern units are only approximately copies of each other.",
"Approximate geometric models, e.g. as created by reverse engineering, describe the approximate shape of an object, but do not record the underlying design intent. Automatically inferring geometric aspects of the design intent, represented by feature trees and geometric constraints, enhances the utility of such models for downstream tasks. One approach to design intent detection in such models is to decompose them into regularity features. Geometric regularities such as symmetries may then be sought in each regularity feature, and subsequently be combined into a global, consistent description of the model's geometric design intent. This paper describes a systematic approach for finding such regularity features based on recovering broken symmetries in the model. The output is a tree of regularity features for subsequent use in regularity detection and selection. Experimental results are given to demonstrate the operation and efficiency of the algorithm.",
"Symmetry detection aims at discovering redundancy in the form of reoccurring structures in geometric objects. In this paper, we present a new symmetry detection algorithm for geometry represented as point clouds that is based on analyzing a graph of surface features. We combine a general feature detection scheme with a RANSAC-based randomized subgraph searching algorithm in order to reliably detect reoccurring patterns of locally unique structures. A subsequent segmentation step based on a simultaneous region growing variant of the ICP algorithm is applied to verify that the actual point cloud data supports the pattern detected in the feature graphs. We apply our algorithm to synthetic and real-world 3D scanner data sets, demonstrating robust symmetry detection results in the presence of scanning artifacts and noise. The modular and flexible nature of the graph-based detection scheme allows for easy generalizations of the algorithm, which we demonstrate by applying the same technique to other data modalities such as images or triangle meshes.",
""
]
}
|
1309.7472
|
2260540739
|
A comprehensive framework for detection and characterization of overlapping intrinsic symmetry over 3D shapes is proposed. To identify prominent symmetric regions which overlap in space and vary in form, the proposed framework is decoupled into a Correspondence Space Voting procedure followed by a Transformation Space Mapping procedure. In the correspondence space voting procedure, significant symmetries are first detected by identifying surface point pairs on the input shape that exhibit local similarity in terms of their intrinsic geometry while simultaneously maintaining an intrinsic distance structure at a global level. Since different point pairs can share a common point, the detected symmetric shape regions can potentially overlap. To this end, a global intrinsic distance-based voting technique is employed to ensure the inclusion of only those point pairs that exhibit significant symmetry. In the transformation space mapping procedure, the Functional Map framework is employed to generate the final map of symmetries between point pairs. The transformation space mapping procedure ensures the retrieval of the underlying dense correspondence map throughout the 3D shape that follows a particular symmetry. Additionally, the formulation of a novel cost matrix enables the inner product to succesfully indicate the complexity of the underlying symmetry transformation. The proposed transformation space mapping procedure is shown to result in the formulation of a semi-metric symmetry space where each point in the space represents a specific symmetry transformation and the distance between points represents the complexity between the corresponding transformations. Experimental results show that the proposed framework can successfully process complex 3D shapes that possess rich symmetries.
|
@cite_20 have proposed an eigen-analysis technique for symmetry detection that relies on spectral clustering. The top eigenvectors of their geometric similarity-based SCM characterize the symmetry-defining orbits, where each orbit includes all points that are symmetric with one another. However, their work is not suited for multi-scale partial symmetry detection. First, expressing local point similarities as symmetry invariants is only appropriate for global intrinsic symmetry detection. In the case of partial symmetry detection, it is not always possible to reliably judge if two surface points are symmetric by comparing only their point (i.e., local) signatures, especially when one point lies on the boundary of symmetric regions. Moreover, their single-stage clustering procedure is unable to identify overlapping symmetries. @cite_25 have extended the eigen-analysis approach of @cite_20 by incorporating the concept of global intrinsic distance-based symmetry support accompanied by a 2-stage spectral clustering procedure to distinguish between scale detection and symmetry detection. Although they showed some interesting results, the 2-stage spectral clustering procedure made their method extremely slow. Furthermore, the absence of transformation map retrieval meant that further processing of the detected symmetries, which are represented as point pairs, was extremely inefficient.
|
{
"cite_N": [
"@cite_25",
"@cite_20"
],
"mid": [
"2045323633",
"2077031952"
],
"abstract": [
"We present an algorithm for multi-scale partial intrinsic symmetry detection over 2D and 3D shapes, where the scale of a symmetric region is defined by intrinsic distances between symmetric points over the region. To identify prominent symmetric regions which overlap and vary in form and scale, we decouple scale extraction and symmetry extraction by performing two levels of clustering. First, significant symmetry scales are identified by clustering sample point pairs from an input shape. Since different point pairs can share a common point, shape regions covered by points in different scale clusters can overlap. We introduce the symmetry scale matrix (SSM), where each entry estimates the likelihood two point pairs belong to symmetries at the same scale. The pair-to-pair symmetry affinity is computed based on a pair signature which encodes scales. We perform spectral clustering using the SSM to obtain the scale clusters. Then for all points belonging to the same scale cluster, we perform the second-level spectral clustering, based on a novel point-to-point symmetry affinity measure, to extract partial symmetries at that scale. We demonstrate our algorithm on complex shapes possessing rich symmetries at multiple scales.",
"We introduce the Symmetry Factored Embedding (SFE) and the Symmetry Factored Distance (SFD) as new tools to analyze and represent symmetries in a point set. The SFE provides new coordinates in which symmetry is \"factored out,\" and the SFD is the Euclidean distance in that space. These constructions characterize the space of symmetric correspondences between points -- i.e., orbits. A key observation is that a set of points in the same orbit appears as a clique in a correspondence graph induced by pairwise similarities. As a result, the problem of finding approximate and partial symmetries in a point set reduces to the problem of measuring connectedness in the correspondence graph, a well-studied problem for which spectral methods provide a robust solution. We provide methods for computing the SFE and SFD for extrinsic global symmetries and then extend them to consider partial extrinsic and intrinsic cases. During experiments with difficult examples, we find that the proposed methods can characterize symmetries in inputs with noise, missing data, non-rigid deformations, and complex symmetries, without a priori knowledge of the symmetry group. As such, we believe that it provides a useful tool for automatic shape analysis in applications such as segmentation and stationary point detection."
]
}
|
1309.7472
|
2260540739
|
A comprehensive framework for detection and characterization of overlapping intrinsic symmetry over 3D shapes is proposed. To identify prominent symmetric regions which overlap in space and vary in form, the proposed framework is decoupled into a Correspondence Space Voting procedure followed by a Transformation Space Mapping procedure. In the correspondence space voting procedure, significant symmetries are first detected by identifying surface point pairs on the input shape that exhibit local similarity in terms of their intrinsic geometry while simultaneously maintaining an intrinsic distance structure at a global level. Since different point pairs can share a common point, the detected symmetric shape regions can potentially overlap. To this end, a global intrinsic distance-based voting technique is employed to ensure the inclusion of only those point pairs that exhibit significant symmetry. In the transformation space mapping procedure, the Functional Map framework is employed to generate the final map of symmetries between point pairs. The transformation space mapping procedure ensures the retrieval of the underlying dense correspondence map throughout the 3D shape that follows a particular symmetry. Additionally, the formulation of a novel cost matrix enables the inner product to succesfully indicate the complexity of the underlying symmetry transformation. The proposed transformation space mapping procedure is shown to result in the formulation of a semi-metric symmetry space where each point in the space represents a specific symmetry transformation and the distance between points represents the complexity between the corresponding transformations. Experimental results show that the proposed framework can successfully process complex 3D shapes that possess rich symmetries.
|
The proposed scheme is decoupled into two steps of Correspondence Space Voting and Transformation Space Mapping . The Correspondence Space Voting technique is inspired from the work of @cite_25 , but in our technique, we bypassed the 2 particularly lengthy steps of spectral clustering and the all-pair geodesic distance calculation to improve the running time quite significantly. Moreover, our introduction of Transformation Space Mapping in symmetry detection is quite novel in the sense that this not only provides a concise description of the underlying symmetry transformation, but according to our knowledge this is one of the first works which has the unique ability of characterizing the symmetric transformation.
|
{
"cite_N": [
"@cite_25"
],
"mid": [
"2045323633"
],
"abstract": [
"We present an algorithm for multi-scale partial intrinsic symmetry detection over 2D and 3D shapes, where the scale of a symmetric region is defined by intrinsic distances between symmetric points over the region. To identify prominent symmetric regions which overlap and vary in form and scale, we decouple scale extraction and symmetry extraction by performing two levels of clustering. First, significant symmetry scales are identified by clustering sample point pairs from an input shape. Since different point pairs can share a common point, shape regions covered by points in different scale clusters can overlap. We introduce the symmetry scale matrix (SSM), where each entry estimates the likelihood two point pairs belong to symmetries at the same scale. The pair-to-pair symmetry affinity is computed based on a pair signature which encodes scales. We perform spectral clustering using the SSM to obtain the scale clusters. Then for all points belonging to the same scale cluster, we perform the second-level spectral clustering, based on a novel point-to-point symmetry affinity measure, to extract partial symmetries at that scale. We demonstrate our algorithm on complex shapes possessing rich symmetries at multiple scales."
]
}
|
1309.6978
|
2952377049
|
In this work, we study protocols so that populations of distributed processes can construct networks. In order to highlight the basic principles of distributed network construction we keep the model minimal in all respects. In particular, we assume finite-state processes that all begin from the same initial state and all execute the same protocol (i.e. the system is homogeneous). Moreover, we assume pairwise interactions between the processes that are scheduled by an adversary. The only constraint on the adversary scheduler is that it must be fair. In order to allow processes to construct networks, we let them activate and deactivate their pairwise connections. When two processes interact, the protocol takes as input the states of the processes and the state of the their connection and updates all of them. Initially all connections are inactive and the goal is for the processes, after interacting and activating deactivating connections for a while, to end up with a desired stable network. We give protocols (optimal in some cases) and lower bounds for several basic network construction problems such as spanning line, spanning ring, spanning star, and regular network. We provide proofs of correctness for all of our protocols and analyze the expected time to convergence of most of them under a uniform random scheduler that selects the next pair of interacting processes uniformly at random from all such pairs. Finally, we prove several universality results by presenting generic protocols that are capable of simulating a Turing Machine (TM) and exploiting it in order to construct a large class of networks.
|
Nature has an intrinsic ability to form complex structures and networks via a process known as . By self-assembly, small components (like e.g. molecules) automatically assemble into large, and usually complex structures (like e.g. a crystal). There is an abundance of such examples in the physical world. Lipid molecules form a cell's membrane, ribosomal proteins and RNA coalesce into functional ribosomes, and bacteriophage virus proteins self-assemble a capsid that allows the virus to invade bacteria @cite_19 . Mixtures of RNA fragments that self-assemble into self-replicating ribozymes spontaneously form cooperative catalytic cycles and networks. Such cooperative networks grow faster than selfish autocatalytic cycles indicating an intrinsic ability of RNA populations to evolve greater complexity through cooperation @cite_12 . Through billions of years of prebiotic molecular selection and evolution, nature has produced a basic set of molecules. By combining these simple elements, natural processes are capable of fashioning an enormously diverse range of fabrication units, which can further self-organize into refined structures, materials and molecular machines that not only have high precision, flexibility and error-correction capacity, but are also self-sustaining and evolving. In fact, nature shows a strong preference for bottom-up design.
|
{
"cite_N": [
"@cite_19",
"@cite_12"
],
"mid": [
"2044709436",
"2021492857"
],
"abstract": [
"Self-assembly is the process by which small components automatically assemble themselves into large, complex structures. Examples in nature abound: lipids self-assemble a cell's membrane, and bacteriophage virus proteins self-assemble a capsid that allows the virus to invade other bacteria. Even a phenomenon as simple as crystal formation is a process of self-assembly. How could such a process be described as \"algorithmic?\" The key word in the first sentence is automatically. Algorithms automate a series of simple computational tasks. Algorithmic self-assembly systems automate a series of simple growth tasks, in which the object being grown is simultaneously the machine controlling its own growth.",
"In models of early life it has been suggested that life and evolution would be more easily achieved if RNA molecules could interact, rather than function independently; here an in vitro system is designed with several RNA fragments that can assemble into a ribozyme, showing that cooperative networks formed by these fragments outcompete self-catalytic RNA fragments."
]
}
|
1309.6978
|
2952377049
|
In this work, we study protocols so that populations of distributed processes can construct networks. In order to highlight the basic principles of distributed network construction we keep the model minimal in all respects. In particular, we assume finite-state processes that all begin from the same initial state and all execute the same protocol (i.e. the system is homogeneous). Moreover, we assume pairwise interactions between the processes that are scheduled by an adversary. The only constraint on the adversary scheduler is that it must be fair. In order to allow processes to construct networks, we let them activate and deactivate their pairwise connections. When two processes interact, the protocol takes as input the states of the processes and the state of the their connection and updates all of them. Initially all connections are inactive and the goal is for the processes, after interacting and activating deactivating connections for a while, to end up with a desired stable network. We give protocols (optimal in some cases) and lower bounds for several basic network construction problems such as spanning line, spanning ring, spanning star, and regular network. We provide proofs of correctness for all of our protocols and analyze the expected time to convergence of most of them under a uniform random scheduler that selects the next pair of interacting processes uniformly at random from all such pairs. Finally, we prove several universality results by presenting generic protocols that are capable of simulating a Turing Machine (TM) and exploiting it in order to construct a large class of networks.
|
Systems and solutions inspired by nature have often turned out to be extremely practical and efficient. For example, the bottom-up approach of nature inspires the fabrication of biomaterials by attempting to mimic these phenomena with the aim of creating new and varied structures with novel utilities well beyond the gifts of nature @cite_7 . Moreover, there is already a remarkable amount of work envisioning our future ability to engineer computing and robotic systems by manipulating molecules with nanoscale precision. Ambitious long-term applications include molecular computers @cite_16 and miniature (nano)robots for surgical instrumentation, diagnosis and drug delivery in medical applications (e.g. it has very recently been reported that DNA nanorobots could even kill cancer cells @cite_23 ) and monitoring in extreme conditions (e.g. in toxic environments). However, the road towards this vision passes first through our ability to discover . The gain of developing such a theory will be twofold: It will give some insight to the role (and the mechanisms) of network formation in the complexity of natural processes and it will allow us engineer artificial systems that achieve this complexity.
|
{
"cite_N": [
"@cite_16",
"@cite_23",
"@cite_7"
],
"mid": [
"1999817784",
"1976815031",
"2006225923"
],
"abstract": [
"The processors of most computers work in series, performing one instruction at a time. This limits their ability to perform certain types of tasks in a reasonable period. An approach based on arrays of simultaneously interacting molecular switches could enable previously intractable computational problems to be solved.",
"We describe an autonomous DNA nanorobot capable of transporting molecular payloads to cells, sensing cell surface inputs for conditional, triggered activation, and reconfiguring its structure for payload delivery. The device can be loaded with a variety of materials in a highly organized fashion and is controlled by an aptamer-encoded logic gate, enabling it to respond to a wide array of cues. We implemented several different logical AND gates and demonstrate their efficacy in selective regulation of nanorobot function. As a proof of principle, nanorobots loaded with combinations of antibody fragments were used in two different types of cell-signaling stimulation in tissue culture. Our prototype could inspire new designs with different selectivities and biologically active payloads for cell-targeting tasks.",
"Two complementary strategies can be used in the fabrication of molecular biomaterials. In the 'top-down' approach, biomaterials are generated by stripping down a complex entity into its component parts (for example, paring a virus particle down to its capsid to form a viral cage). This contrasts with the 'bottom-up' approach, in which materials are assembled molecule by molecule (and in some cases even atom by atom) to produce novel supramolecular architectures. The latter approach is likely to become an integral part of nanomaterials manufacture and requires a deep understanding of individual molecular building blocks and their structures, assembly properties and dynamic behaviors. Two key elements in molecular fabrication are chemical complementarity and structural compatibility, both of which confer the weak and noncovalent interactions that bind building blocks together during self-assembly. Using natural processes as a guide, substantial advances have been achieved at the interface of nanomaterials and biology, including the fabrication of nanofiber materials for three-dimensional cell culture and tissue engineering, the assembly of peptide or protein nanotubes and helical ribbons, the creation of living microlenses, the synthesis of met al nanowires on DNA templates, the fabrication of peptide, protein and lipid scaffolds, the assembly of electronic materials by bacterial phage selection, and the use of radiofrequency to regulate molecular behaviors."
]
}
|
1309.7001
|
2951276839
|
The Kaczmarz method is an iterative method for solving large systems of equations that projects iterates orthogonally onto the solution space of each equation. In contrast to direct methods such as Gaussian elimination or QR-factorization, this algorithm is efficient for problems with sparse matrices, as they appear in constraint-based user interface (UI) layout specifications. However, the Kaczmarz method as described in the literature has its limitations: it considers only equality constraints and does not support soft constraints, which makes it inapplicable to the UI layout problem. In this paper we extend the Kaczmarz method for solving specifications containing soft constraints, using the prioritized IIS detection algorithm. Furthermore, the performance and convergence of the proposed algorithms are evaluated empirically using randomly generated UI layout specifications of various sizes. The results show that these methods offer improvements in performance over standard methods like Matlab's LINPROG, a well-known efficient linear programming solver.
|
Different direct and iterative methods exist which can solve least-squares problems. Examples of direct methods are QR-factorization @cite_26 and normal equations @cite_50 . Iterative methods that are used to solve linear systems include simplex @cite_41 , revised simplex @cite_46 , conjugate gradient @cite_58 , generalized minimal residual @cite_40 , and others @cite_35 . In order to solve over-determined systems in a least-squares sense, Herman adds some additional constraints and variables, similar to slack variables in the simplex approach @cite_1 . The resulting method converges towards the least-squares solution. Popa analyzed a similar approach for solving least-squares problems @cite_16 @cite_17 . Censor showed that if the relaxation parameter goes to zero then the Kaczmarz method converges to a weighted least-squares solution for inconsistent systems @cite_55 .
|
{
"cite_N": [
"@cite_35",
"@cite_26",
"@cite_41",
"@cite_55",
"@cite_1",
"@cite_16",
"@cite_40",
"@cite_50",
"@cite_46",
"@cite_58",
"@cite_17"
],
"mid": [
"1546004968",
"1485446097",
"",
"",
"",
"2010440992",
"2140153041",
"",
"1968902264",
"2316564661",
"1967981081"
],
"abstract": [
"Preface Acknowledgements 1. Direct solution methods 2. Theory of matrix eigenvalues 3. Positive definite matrices, Schur complements, and generalized eigenvalue problems 4. Reducible and irreducible matrices and the Perron-Frobenius theory for nonnegative matrices 5. Basic iterative methods and their rates of convergence 6. M-matrices, convergent splittings, and the SOR method 7. Incomplete factorization preconditioning methods 8. Approximate matrix inverses and corresponding preconditioning methods 9. Block diagonal and Schur complement preconditionings 10. Estimates of eigenvalues and condition numbers for preconditional matrices 11. Conjugate gradient and Lanczos-type methods 12. Generalized conjugate gradient methods 13. The rate of convergence of the conjugate gradient method Appendices.",
"1. A Review of Some Required Concepts from Core Linear Algebra. 2. Floating Point Numbers and Errors in Computations. 3. Stability of Algorithms and Conditioning of Problems. 4. Numerically Effective Algorithms and Mathematical Software. 5. Some Useful Transformations in Numerical Linear Algebra and Their Applications. 6. Numerical Solutions of Linear Systems. 7. Least Squares Solutions to Linear Systems. 8. Numerical Matrix Eigenvalue Problems. 9. The Generalized Eigenvalue Problem. 10. The Singular Value Decomposition (SVD). 11. A Taste of Round-Off Error Analysis. Appendix A: A Brief Introduction to MATLAB. Appendix B: MATLAB and Selected MATLAB Programs.",
"",
"",
"",
"For numerical computation of the minimal Euclidean norm (least-squares) solution of overdetermined linear systems, usually direct solvers are used (like QR decomposition, see [4]). The iterative methods for such kind of problems need special assumptions about the system (consistency, full rank of the system matrix, some parameters they use or they give not the minimal length solution, [2,3,5,8,10,13]). In the present paper we purpose two iterative algorithms which generate sequences convergent to the minimal Euclidean length solution in the general case (inconsistent system and rank deficient matrix). The algorithms use only some combinations and properties of the well-known Kaczmarz iterative method ([13]) and need no special assumptions about the system.",
"We present an iterative method for solving linear systems, which has the property of minimizing at every step the norm of the residual vector over a Krylov subspace. The algorithm is derived from t...",
"",
"1. Overview of Operations Research. I. DETERMINISTIC MODELS. 2. Introduction to Linear Programming. 3. The Simplex Method. 4. Duality and Sensitivity Analysis. 5. Transportation Model and Its Variants. 6. Network Models. 7. Advanced Linear Programming. 8. Goal Programming. 9. Integer Linear Programming. 10. Deterministic Dynamic Programming. 11. Deterministic Inventory Models. II. PROBABILISTIC MODELS. 12. Review of Basic Probability. 13. Forecasting Models. 14. Decision Analysis and Games. 15. Probabilistic Dynamic Programming. 16. Probabilistic Inventory Models. 17. Queueing Systems. 18. Simulation Modeling. 19. Markovian Decision Process. III. NONLINEAR MODELS. 20. Classical Optimization Theory. 21. Nonlinear Programming Algorithms. Appendix A: Review of Matrix Algebra. Appendix B: Introduction to Simnet II. Appendix C: Tora and Simnet II Installation and Execution. Appendix D: Statistical Tables. Appendix E: Answers to Odd-Numbered Problems. Index.",
"An iterative algorithm is given for solving a system Ax=k of n linear equations in n unknowns. The solution is given in n steps. It is shown that this method is a special case of a very general method which also includes Gaussian elimination. These general algorithms are essentially algorithms for finding an n dimensional ellipsoid. Connections are made with the theory of orthogonal polynomials and continued fractions.",
"The algebraic reconstruction technique (ART), based on the well known algorithm proposed by Kaczmarz in 1937, is one of the most important class of solution methods for image reconstruction problems. But unfortunately, almost all the methods from the ART class give satisfactory results only in the case of consistent problems. In the inconsistent case (and unfortunately this is what happens in real applications, because of measurement errors) they give only more or less \"controllable\" versions of the exact solutions. This is exactly the case that we analyze in the present paper. We start with a theoretical analysis of the classical Kaczmarz's projection method in the case of an inconsistent linear least-squares problem and we prove that the approximations so obtained are at a certain distance from the set of exact least-squares solutions. This distance is controlled by the component of the right hand side of the problem lying in the orthogonal complement of the range of problem's matrix, i.e. exactly the component that makes the problem inconsistent. For overcoming this difficulty we consider an extended version of Kaczmarz's algorithm, previously analyzed by one of the authors. In the numerical experiments described in the last part of the paper we compare the above mentioned extension with two well known (ART) type algorithms for image reconstruction in two electromagnetic geotomography problems. The results indicate that the extended Kaczmarz algorithm gives much better results than the other two."
]
}
|
1309.7001
|
2951276839
|
The Kaczmarz method is an iterative method for solving large systems of equations that projects iterates orthogonally onto the solution space of each equation. In contrast to direct methods such as Gaussian elimination or QR-factorization, this algorithm is efficient for problems with sparse matrices, as they appear in constraint-based user interface (UI) layout specifications. However, the Kaczmarz method as described in the literature has its limitations: it considers only equality constraints and does not support soft constraints, which makes it inapplicable to the UI layout problem. In this paper we extend the Kaczmarz method for solving specifications containing soft constraints, using the prioritized IIS detection algorithm. Furthermore, the performance and convergence of the proposed algorithms are evaluated empirically using randomly generated UI layout specifications of various sizes. The results show that these methods offer improvements in performance over standard methods like Matlab's LINPROG, a well-known efficient linear programming solver.
|
Most of the research related to GUI layout involves various algorithms for solving constraint hierarchies. Research related to constraint-based UI layout has provided results in the form of tools @cite_22 @cite_45 and algorithms @cite_28 @cite_13 for specific tasks. Our work is concerned with two different aspects. We must find a solution for linear inequality constraints with iterative methods, while also being able to handle soft constraints. We will discuss related work for both aspects in turn.
|
{
"cite_N": [
"@cite_28",
"@cite_45",
"@cite_13",
"@cite_22"
],
"mid": [
"2073536284",
"",
"2016321606",
"1993924106"
],
"abstract": [
"Linear equality and inequality constraints arise naturally in specifying many aspects of user interfaces, such as requiring that one window be to the left of another, requiring that a pane occupy the leftmost third of a window, or preferring that an object be contained within a rectangle if possible. Previous constraint solvers designed for user interface applications cannot handle simultaneous linear equations and inequalities efficiently. This is a major limitation, as such systems of constraints arise often in natural declarative specifications. We describe Cassowary---an incremental algorithm based on the dual simplex method, which can solve such systems of constraints efficiently. We have implemented the algorithm as part of a constraint-solving toolkit. We discuss the implementation of the toolkit, its application programming interface, and its performance.",
"",
"Linear equality and inequality constraints arise naturally in specifying many aspects of user interfaces, such as requiring that onewindowbe to the left of another, requiring that a pane occupy the leftmost 1 3 of a window, or preferring that an object be contained within a rectangle if possible. Current constraint solvers designed for UI applications cannot efficiently handle simultaneous linear equations and inequalities. This is amajor limitation. We describe incremental algorithms based on the dual simplex and active set methods that can solve such systems of constraints efficiently.",
"We propose a scalable algorithm called HiRise2 for incrementally solving soft linear constraints over real domains. It is based on a framework for soft constraints, known as constraint hierarchies, to allow effective modeling of user interface applications by using hierarchical preferences for constraints. HiRise2 introduces LU decompositions to improve the scalability of an incremental simplex method. Using this algorithm, we implemented a constraint solver. We also show the results of experiments on the performance of the solver."
]
}
|
1309.7001
|
2951276839
|
The Kaczmarz method is an iterative method for solving large systems of equations that projects iterates orthogonally onto the solution space of each equation. In contrast to direct methods such as Gaussian elimination or QR-factorization, this algorithm is efficient for problems with sparse matrices, as they appear in constraint-based user interface (UI) layout specifications. However, the Kaczmarz method as described in the literature has its limitations: it considers only equality constraints and does not support soft constraints, which makes it inapplicable to the UI layout problem. In this paper we extend the Kaczmarz method for solving specifications containing soft constraints, using the prioritized IIS detection algorithm. Furthermore, the performance and convergence of the proposed algorithms are evaluated empirically using randomly generated UI layout specifications of various sizes. The results show that these methods offer improvements in performance over standard methods like Matlab's LINPROG, a well-known efficient linear programming solver.
|
The Cassowary solver @cite_28 can also handle linear inequalities. It uses the simplex algorithm, and inequalities are solved by introducing slack variables. QOCA @cite_5 intends to overcome the difficulties in maximizing the efficiency and facilitating the re-use of the solver in other applications. This solver introduces slack variables to convert inequality constraints into equality constraints in a similar way to the Cassowary solver. The HiRise constraint solver @cite_33 resolves both equality and inequality constraints in combination with quasi-linear optimization.
|
{
"cite_N": [
"@cite_28",
"@cite_5",
"@cite_33"
],
"mid": [
"2073536284",
"2611138713",
""
],
"abstract": [
"Linear equality and inequality constraints arise naturally in specifying many aspects of user interfaces, such as requiring that one window be to the left of another, requiring that a pane occupy the leftmost third of a window, or preferring that an object be contained within a rectangle if possible. Previous constraint solvers designed for user interface applications cannot handle simultaneous linear equations and inequalities efficiently. This is a major limitation, as such systems of constraints arise often in natural declarative specifications. We describe Cassowary---an incremental algorithm based on the dual simplex method, which can solve such systems of constraints efficiently. We have implemented the algorithm as part of a constraint-solving toolkit. We discuss the implementation of the toolkit, its application programming interface, and its performance.",
"We describe an object-oriented constraint solving toolkit, QOCA, designed for interactive graphical applications. It has a simple yet powerful interface based on the metric space model for constraint manipulation. Currently QOCA supports linear arithmetic constraints and two different metrics: the square of the Euclidean distance and the Manhattan distance. It provides three solvers, all of which rely on keeping the constraints in solved form and relies on novel algorithms for efficient resolving of constraints during direct manipulation. We provide a thorough empirical evaluation of QOCA, both of the interface design and the speed of constraint solving.",
""
]
}
|
1309.7001
|
2951276839
|
The Kaczmarz method is an iterative method for solving large systems of equations that projects iterates orthogonally onto the solution space of each equation. In contrast to direct methods such as Gaussian elimination or QR-factorization, this algorithm is efficient for problems with sparse matrices, as they appear in constraint-based user interface (UI) layout specifications. However, the Kaczmarz method as described in the literature has its limitations: it considers only equality constraints and does not support soft constraints, which makes it inapplicable to the UI layout problem. In this paper we extend the Kaczmarz method for solving specifications containing soft constraints, using the prioritized IIS detection algorithm. Furthermore, the performance and convergence of the proposed algorithms are evaluated empirically using randomly generated UI layout specifications of various sizes. The results show that these methods offer improvements in performance over standard methods like Matlab's LINPROG, a well-known efficient linear programming solver.
|
All constraint solvers for UI layout must support over-determined systems. The commonly used techniques for dealing with over-determined problems are weighted constraints and constraint hierarchies @cite_42 @cite_11 . Weighted constraints are typically used with some general forms of direct methods, while constraint hierarchies are especially utilized in linear programming based algorithms. Many UI layout solvers are based on linear programming and support soft constraints using slack variables in the objective function @cite_28 @cite_13 @cite_5 @cite_6 .
|
{
"cite_N": [
"@cite_28",
"@cite_42",
"@cite_6",
"@cite_5",
"@cite_13",
"@cite_11"
],
"mid": [
"2073536284",
"1504034731",
"",
"2611138713",
"2016321606",
"1740773753"
],
"abstract": [
"Linear equality and inequality constraints arise naturally in specifying many aspects of user interfaces, such as requiring that one window be to the left of another, requiring that a pane occupy the leftmost third of a window, or preferring that an object be contained within a rectangle if possible. Previous constraint solvers designed for user interface applications cannot handle simultaneous linear equations and inequalities efficiently. This is a major limitation, as such systems of constraints arise often in natural declarative specifications. We describe Cassowary---an incremental algorithm based on the dual simplex method, which can solve such systems of constraints efficiently. We have implemented the algorithm as part of a constraint-solving toolkit. We discuss the implementation of the toolkit, its application programming interface, and its performance.",
"We summarize existing approaches to model and solve overconstrained problems. These problems are usually formulated as combinatorial optimization problems, and different specific and generic formalisms are discussed, including the special case of multi-objective optimization. Regarding solving methods, both systematic and local search approaches are considered. Finally we review a number of case studies on overconstrained problems taken from the specialized literature.",
"",
"We describe an object-oriented constraint solving toolkit, QOCA, designed for interactive graphical applications. It has a simple yet powerful interface based on the metric space model for constraint manipulation. Currently QOCA supports linear arithmetic constraints and two different metrics: the square of the Euclidean distance and the Manhattan distance. It provides three solvers, all of which rely on keeping the constraints in solved form and relies on novel algorithms for efficient resolving of constraints during direct manipulation. We provide a thorough empirical evaluation of QOCA, both of the interface design and the speed of constraint solving.",
"Linear equality and inequality constraints arise naturally in specifying many aspects of user interfaces, such as requiring that onewindowbe to the left of another, requiring that a pane occupy the leftmost 1 3 of a window, or preferring that an object be contained within a rectangle if possible. Current constraint solvers designed for UI applications cannot efficiently handle simultaneous linear equations and inequalities. This is amajor limitation. We describe incremental algorithms based on the dual simplex and active set methods that can solve such systems of constraints efficiently.",
"Constraint hierarchies provide a framework for soft constraints, and have been applied to areas such as artificial intelligence, logic programming, and user interfaces. In this framework, constraints are associated with hierarchical preferences or priorities called strengths, and may be relaxed if they conflict with stronger constraints. To utilize constraint hierarchies, researchers have designed and implemented various practical constraint satisfaction algorithms. Although existing algorithms can be categorized into several approaches, what kinds of algorithms are possible has been unclear from a more general viewpoint. In this paper, we propose a novel theory called generalized local propagation as a foundation of algorithms for solving constraint hierarchies. This theory formalizes a way to express algorithms as constraint scheduling, and presents theorems that support possible approaches. A benefit of this theory is that it covers algorithms using constraint hierarchy solution criteria known as global comparators, for which only a small number of algorithms have been implemented. With this theory, we provide a new classification of solution criteria based on their difficulties in constraint satisfaction. We also discuss how existing algorithms are related to our theory, which will be helpful in designing new algorithms."
]
}
|
1309.7001
|
2951276839
|
The Kaczmarz method is an iterative method for solving large systems of equations that projects iterates orthogonally onto the solution space of each equation. In contrast to direct methods such as Gaussian elimination or QR-factorization, this algorithm is efficient for problems with sparse matrices, as they appear in constraint-based user interface (UI) layout specifications. However, the Kaczmarz method as described in the literature has its limitations: it considers only equality constraints and does not support soft constraints, which makes it inapplicable to the UI layout problem. In this paper we extend the Kaczmarz method for solving specifications containing soft constraints, using the prioritized IIS detection algorithm. Furthermore, the performance and convergence of the proposed algorithms are evaluated empirically using randomly generated UI layout specifications of various sizes. The results show that these methods offer improvements in performance over standard methods like Matlab's LINPROG, a well-known efficient linear programming solver.
|
Most of the direct methods for soft constraint problems are least-squares methods such as LU-decompos -ition and QR-decompo -sition @cite_30 . The UI layout solver HiRise @cite_33 is an example of this category. HiRise2 @cite_22 is an extended version of the HiRise constraint solver which solves hierarchies of linear constraints by applying an LU-decompos -ition-based simplex method.
|
{
"cite_N": [
"@cite_30",
"@cite_22",
"@cite_33"
],
"mid": [
"2157057346",
"1993924106",
""
],
"abstract": [
"In this paper, we propose a constrained least squares approach for stably computing Laplacian deformation with strict positional constraints. In the existing work on Laplacian deformation, strict positional constraints are described using large values of least squares weights, which often cause numerical problems when Laplacians are described using cotangent weights. In our method, we describe strict positional constraints as hard constraints. We solve the combination of hard and soft constraints by constructing a typical least squares matrix form using QR decomposition. In addition, our method can manage shape deformation under over-constraints, such as redundant and conflicting constraints. Our framework achieves excellent performance for interactive deformation of mesh models.",
"We propose a scalable algorithm called HiRise2 for incrementally solving soft linear constraints over real domains. It is based on a framework for soft constraints, known as constraint hierarchies, to allow effective modeling of user interface applications by using hierarchical preferences for constraints. HiRise2 introduces LU decompositions to improve the scalability of an incremental simplex method. Using this algorithm, we implemented a constraint solver. We also show the results of experiments on the performance of the solver.",
""
]
}
|
1309.7001
|
2951276839
|
The Kaczmarz method is an iterative method for solving large systems of equations that projects iterates orthogonally onto the solution space of each equation. In contrast to direct methods such as Gaussian elimination or QR-factorization, this algorithm is efficient for problems with sparse matrices, as they appear in constraint-based user interface (UI) layout specifications. However, the Kaczmarz method as described in the literature has its limitations: it considers only equality constraints and does not support soft constraints, which makes it inapplicable to the UI layout problem. In this paper we extend the Kaczmarz method for solving specifications containing soft constraints, using the prioritized IIS detection algorithm. Furthermore, the performance and convergence of the proposed algorithms are evaluated empirically using randomly generated UI layout specifications of various sizes. The results show that these methods offer improvements in performance over standard methods like Matlab's LINPROG, a well-known efficient linear programming solver.
|
Many different local propagation algorithms have been proposed for solving constraint hierarchies in UI layout. The DeltaBlue @cite_10 , SkyBlue @cite_31 and Detail @cite_43 algorithms are examples of this category.
|
{
"cite_N": [
"@cite_43",
"@cite_31",
"@cite_10"
],
"mid": [
"1485061130",
"1992337747",
"2053637323"
],
"abstract": [
"Local propagation is often used in graphical user interfaces to solve constraint systems that describe structures and layouts of figures. However, algorithms based on local propagation cannot solve simultaneous constraint systems because local propagation must solve constraints individually. We propose the ‘DETAIL’ algorithm, which efficiently solves systems of constraints with strengths, even if they must be solved simultaneously, by ‘dividing’ them as much as possible. In addition to multi-way constraints, it handles various other types of constraints, for example, constraints solved with the least squares method. Furthermore, it unifies the treatment of different types of constraints in a single system. We implemented a prototype constraint solver based on this algorithm, and evaluated its performance.",
"Many user interface toolkits use constraint solvers to maintain geometric relationships between graphic objects, or to connect the graphics to the application data structures. One efficient and flexible technique for maintaining constraints is multi-way local propagation, where constraints are represented by sets of method procedures. To satisfy a set of constraints, a local propagation solver executes one method from each constraint. SkyBlue is an incremental constraint solver that uses local propagation to maintain a set of constraints as individual constraints are added and removed. If all of the constraints cannot be satisfied, SkyBlue leaves weaker constraints unsatisfied in order to satisfy stronger constraints (maintaining a constraint hierarchy). SkyBlue is a more general successor to the DeltaBlue algorithm that satisfies cycles of methods by calling external cycle solvers and supports multi-output methods. These features make SkyBlue more useful for constructing user interfaces, since cycles of constraints can occur frequently in user interface applications and multi-output methods are necessary to represent some useful constraints. This paper discusses some of applications that use SkyBlue, presents times for some user interface benchmarks and describes the SkyBlue algorithm in detail.",
"An incremental constraint solver, the DeltaBlue algorithm maintains an evolving solution to the constraint hierarchy as constraints are added and removed. DeltaBlue minimizes the cost of finding a new solution after each change by exploiting its knowledge of the last solution."
]
}
|
1309.7001
|
2951276839
|
The Kaczmarz method is an iterative method for solving large systems of equations that projects iterates orthogonally onto the solution space of each equation. In contrast to direct methods such as Gaussian elimination or QR-factorization, this algorithm is efficient for problems with sparse matrices, as they appear in constraint-based user interface (UI) layout specifications. However, the Kaczmarz method as described in the literature has its limitations: it considers only equality constraints and does not support soft constraints, which makes it inapplicable to the UI layout problem. In this paper we extend the Kaczmarz method for solving specifications containing soft constraints, using the prioritized IIS detection algorithm. Furthermore, the performance and convergence of the proposed algorithms are evaluated empirically using randomly generated UI layout specifications of various sizes. The results show that these methods offer improvements in performance over standard methods like Matlab's LINPROG, a well-known efficient linear programming solver.
|
Besides methods for MaxFS there are also some methods to solve the IIS problem. These methods are: deletion filtering, IIS detection and grouping constraints. Deletion filtering @cite_23 removes constraints from the set of constraints and checks the feasibility of the reduced set. IIS detection @cite_39 starts with a single constraint and adds constraints successively. The grouping constraints method @cite_56 was introduced to speed up the aforementioned algorithms by adding or removing groups of constraints simultaneously. Even though these methods deal with the problem of finding a feasible subsystem, it is not possible to apply them directly. The main reason is that they do not consider prioritized constraints as we do in our approach.
|
{
"cite_N": [
"@cite_56",
"@cite_23",
"@cite_39"
],
"mid": [
"2113730096",
"2160089068",
"1981066696"
],
"abstract": [
"Algorithms and computer-based tools for analyzing infeasible linear and nonlinear programs have been developed in recent years, but few such tools exist for infeasible mixed-integer or integer linear programs. One approach that has proven especially useful for infeasible linear programs is the isolation of an Irreducible Infeasible Set of constraints (IIS), a subset of the constraints defining the overall linear program that is itself infeasible, but for which any proper subset is feasible. Isolating an IIS from the larger model speeds the diagnosis and repair of the model by focussing the analytic effort. This paper describes and tests algorithms for finding small infeasible sets in infeasible mixed-integer and integer linear programs; where possible these small sets are IISs.",
"With ongoing advances in hardware and software, the bottleneck in linear programming is no longer a model solution, it is the correct formulation of large models in the first place. During initial formulation (or modification), a very large model may prove infeasible, but it is often difficult to determine how to correct it. We present a formulation aid which analyzes infeasible LP s and identifies minimal sets of inconsistent constraints from among the perhaps very large set of constraints defining the problem. This information helps to focus the search for a diagnosis of the problem, speeding the repair of the model. We present a series of filtering routines and a final integrated algorithm which guarantees the identification of at least one minimal set of inconsistent constraints. This guarantee is a significant advantage over previous methods. The algorithms are simple, relatively efficient, and easily incorporated into standard LP solvers. Preliminary computational results are reported. INFORMS Journal on Computing , ISSN 1091-9856, was published as ORSA Journal on Computing from 1989 to 1995 under ISSN 0899-1499.",
"This paper presents ideas from goal programming (GP) used as an accompaniment to linear programming (LP) for the analysis of LP infeasibility. A new algorithm (GPIIS) for the detection of irreducibly inconsistent systems (IIS) of constraints is presented using this approach. The structure necessary for implementing such a procedure into a commercial LP solver is outlined. Results for a selection of infeasible LP problems are given, and conclusions drawn."
]
}
|
1309.6352
|
2951934610
|
Past work on personality detection has shown that frequency of lexical categories such as first person pronouns, past tense verbs, and sentiment words have significant correlations with personality traits. In this paper, for the first time, we show that fine affect (emotion) categories such as that of excitement, guilt, yearning, and admiration are significant indicators of personality. Additionally, we perform experiments to show that the gains provided by the fine affect categories are not obtained by using coarse affect categories alone or with specificity features alone. We employ these features in five SVM classifiers for detecting five personality traits through essays. We find that the use of fine emotion features leads to statistically significant improvement over a competitive baseline, whereas the use of coarse affect and specificity features does not.
|
Pennebaker and King used lexical categories from Linguistic Inquiry and Word Count (LIWC) to identify linguistic correlates of personality. http: www.liwc.net They showed, for example, that agreeability is characterized with more positive emotion words and fewer articles and that neurotism is characterized with more negative emotion words and more first-person pronouns. @ improved on these features and distribute their system online. http: people.csail.mit.edu francois research personality .html We use all of their features to create our baseline classifier--- the Mairesse baseline . Some of these features are listed below: word count, words per sentence, type token ratio, words longer than six letters, negations, assents, articles, prepositions, numbers, pronouns (first person, second person, third person), emotion words, cognition words ( insight, tentative ), sensory and perceptual words ( see, hear ), social processes words ( chat, friend ), time words, space words, motion words, punctuations, and swear words. Both Pennebaker and King and @ worked with the Essays dataset. More recently, there is also work on personality detection from blogs @cite_1 , Facebook posts @cite_3 , and Twitter posts and follower network @cite_11 . There also exist websites that analyze blogs and display the personality types of the authors. http: www.typealyzer.com
|
{
"cite_N": [
"@cite_1",
"@cite_3",
"@cite_11"
],
"mid": [
"",
"2153803020",
"2113166392"
],
"abstract": [
"",
"We show that easily accessible digital records of behavior, Facebook Likes, can be used to automatically and accurately predict a range of highly sensitive personal attributes including: sexual orientation, ethnicity, religious and political views, personality traits, intelligence, happiness, use of addictive substances, parental separation, age, and gender. The analysis presented is based on a dataset of over 58,000 volunteers who provided their Facebook Likes, detailed demographic profiles, and the results of several psychometric tests. The proposed model uses dimensionality reduction for preprocessing the Likes data, which are then entered into logistic linear regression to predict individual psychodemographic profiles from Likes. The model correctly discriminates between homosexual and heterosexual men in 88 of cases, African Americans and Caucasian Americans in 95 of cases, and between Democrat and Republican in 85 of cases. For the personality trait “Openness,” prediction accuracy is close to the test–retest accuracy of a standard personality test. We give examples of associations between attributes and Likes and discuss implications for online personalization and privacy.",
"Microblogging services such as Twitter have become increasingly popular in recent years. However, little is known about how personality is manifested and perceived in microblogs. In this study, we measured the Big Five personality traits of 142 participants and collected their tweets over a 1-month period. Extraversion, agreeableness, openness, and neuroticism were associated with specific linguistic markers, suggesting that personality manifests in microblogs. Meanwhile, eight observers rated the participants’ personality on the basis of their tweets. Results showed that observers relied on specific linguistic cues when making judgments, and could only judge agreeableness and neuroticism accurately. This study provides new empirical evidence of personality expression in naturalistic settings, and points to the potential of utilizing social media for personality research."
]
}
|
1309.6347
|
2950693256
|
With the widespread use of email, we now have access to unprecedented amounts of text that we ourselves have written. In this paper, we show how sentiment analysis can be used in tandem with effective visualizations to quantify and track emotions in many types of mail. We create a large word--emotion association lexicon by crowdsourcing, and use it to compare emotions in love letters, hate mail, and suicide notes. We show that there are marked differences across genders in how they use emotion words in work-place email. For example, women use many words from the joy--sadness axis, whereas men prefer terms from the fear--trust axis. Finally, we show visualizations that can help people track emotions in their emails.
|
Over the last decade, there has been considerable work in sentiment analysis, especially in determining whether a term has a positive or negative polarity @cite_25 @cite_2 @cite_17 . There is also work in more sophisticated aspects of sentiment, for example, in detecting emotions such as anger, joy, sadness, fear, surprise, and disgust @cite_22 @cite_28 @cite_11 @cite_11 . The technology is still developing and it can be unpredictable when dealing with short sentences, but it has been shown to be reliable when drawing conclusions from large amounts of text @cite_5 @cite_6 .
|
{
"cite_N": [
"@cite_11",
"@cite_22",
"@cite_28",
"@cite_6",
"@cite_2",
"@cite_5",
"@cite_25",
"@cite_17"
],
"mid": [
"2168493061",
"2162555959",
"2162010436",
"2097726431",
"2168625136",
"2064178288",
"1548603425",
"2160250477"
],
"abstract": [
"In addition to information, text contains attitudinal, and more specifically, emotional content. This paper explores the text-based emotion prediction problem empirically, using supervised machine learning with the SNoW learning architecture. The goal is to classify the emotional affinity of sentences in the narrative domain of children's fairy tales, for subsequent usage in appropriate expressive rendering of text-to-speech synthesis. Initial experiments on a preliminary data set of 22 fairy tales show encouraging results over a naive baseline and BOW approach for classification of emotional versus non-emotional contents, with some dependency on parameter tuning. We also discuss results for a tripartite model which covers emotional valence, as well as feature set alternations. In addition, we present plans for a more cognitively sound sequential model, taking into consideration a larger set of basic emotions.",
"Though data-driven in nature, emotion analysis based on latent semantic analysis still relies on some measure of expert knowledge in order to isolate the emotional keywords or keysets necessary to the construction of affective categories. This makes it vulnerable to any discrepancy between the ensuing taxonomy of affective states and the underlying domain of discourse. This paper proposes a more general strategy which leverages two distincts semantic levels, one that encapsulates the foundations of the domain considered, and one that specifically accounts for the overall affective fabric of the language. Exposing the emergent relationship between these two levels advantageously informs the emotion classification process. Empirical evidence suggests that this is a promising solution for automatic emotion detection in text.",
"Even though considerable attention has been given to semantic orientation of words and the creation of large polarity lexicons, research in emotion analysis has had to rely on limited and small emotion lexicons. In this paper, we show how we create a high-quality, moderate-sized emotion lexicon using Mechanical Turk. In addition to questions about emotions evoked by terms, we show how the inclusion of a word choice question can discourage malicious data entry, help identify instances where the annotator may not be familiar with the target term (allowing us to reject such annotations), and help obtain annotations at sense level (rather than at word level). We perform an extensive analysis of the annotations to better understand the distribution of emotions evoked by terms of different parts of speech. We identify which emotions tend to be evoked simultaneously by the same term and show that certain emotions indeed go hand in hand.",
"An important part of our information-gathering behavior has always been to find out what other people think. With the growing availability and popularity of opinion-rich resources such as online review sites and personal blogs, new opportunities and challenges arise as people now can, and do, actively use information technologies to seek out and understand the opinions of others. The sudden eruption of activity in the area of opinion mining and sentiment analysis, which deals with the computational treatment of opinion, sentiment, and subjectivity in text, has thus occurred at least in part as a direct response to the surge of interest in new systems that deal directly with opinions as a first-class object. This survey covers techniques and approaches that promise to directly enable opinion-oriented information-seeking systems. Our focus is on methods that seek to address the new challenges raised by sentiment-aware applications, as compared to those that are already present in more traditional fact-based analysis. We include material on summarization of evaluative text and on broader issues regarding privacy, manipulation, and economic impact that the development of opinion-oriented information-access services gives rise to. To facilitate future work, a discussion of available resources, benchmark datasets, and evaluation campaigns is also provided.",
"The evaluative character of a word is called its semantic orientation. Positive semantic orientation indicates praise (e.g., \"honest\", \"intrepid\") and negative semantic orientation indicates criticism (e.g., \"disturbing\", \"superfluous\"). Semantic orientation varies in both direction (positive or negative) and degree (mild to strong). An automated system for measuring semantic orientation would have application in text classification, text filtering, tracking opinions in online discussions, analysis of survey responses, and automated chat systems (chatbots). This article introduces a method for inferring the semantic orientation of a word from its statistical association with a set of positive and negative paradigm words. Two instances of this approach are evaluated, based on two different statistical measures of word association: pointwise mutual information (PMI) and latent semantic analysis (LSA). The method is experimentally tested with 3,596 words (including adjectives, adverbs, nouns, and verbs) that have been manually labeled positive (1,614 words) and negative (1,982 words). The method attains an accuracy of 82.8p on the full test set, but the accuracy rises above 95p when the algorithm is allowed to abstain from classifying mild words.",
"The importance of quantifying the nature and intensity of emotional states at the level of populations is evident: we would like to know how, when, and why individuals feel as they do if we wish, for example, to better construct public policy, build more successful organizations, and, from a scientific perspective, more fully understand economic and social phenomena. Here, by incorporating direct human assessment of words, we quantify happiness levels on a continuous scale for a diverse set of large-scale texts: song titles and lyrics, weblogs, and State of the Union addresses. Our method is transparent, improvable, capable of rapidly processing Web-scale texts, and moves beyond approaches based on coarse categorization. Among a number of observations, we find that the happiness of song lyrics trends downward from the 1960s to the mid 1990s while remaining stable within genres, and that the happiness of blogs has steadily increased from 2005 to 2009, exhibiting a striking rise and fall with blogger age and distance from the Earth’s equator.",
"",
"Sentiment analysis often relies on a semantic orientation lexicon of positive and negative words. A number of approaches have been proposed for creating such lexicons, but they tend to be computationally expensive, and usually rely on significant manual annotation and large corpora. Most of these methods use WordNet. In contrast, we propose a simple approach to generate a high-coverage semantic orientation lexicon, which includes both individual words and multi-word expressions, using only a Roget-like thesaurus and a handful of affixes. Further, the lexicon has properties that support the Polyanna Hypothesis. Using the General Inquirer as gold standard, we show that our lexicon has 14 percentage points more correct entries than the leading WordNet-based high-coverage lexicon (SentiWordNet). In an extrinsic evaluation, we obtain significantly higher performance in determining phrase polarity using our thesaurus-based lexicon than with any other. Additionally, we explore the use of visualization techniques to gain insight into the our algorithm beyond the evaluations mentioned above."
]
}
|
1309.6347
|
2950693256
|
With the widespread use of email, we now have access to unprecedented amounts of text that we ourselves have written. In this paper, we show how sentiment analysis can be used in tandem with effective visualizations to quantify and track emotions in many types of mail. We create a large word--emotion association lexicon by crowdsourcing, and use it to compare emotions in love letters, hate mail, and suicide notes. We show that there are marked differences across genders in how they use emotion words in work-place email. For example, women use many words from the joy--sadness axis, whereas men prefer terms from the fear--trust axis. Finally, we show visualizations that can help people track emotions in their emails.
|
Automatically analyzing affect in emails has primarily been done for automatic gender identification @cite_10 @cite_0 , but it has relied on mostly on surface features such as exclamations and very small emotion lexicons. The WordNet Affect Lexicon (WAL) @cite_20 has a few hundred words annotated with associations to a number of affect categories including the six Ekman emotions (joy, sadness, anger, fear, disgust, and surprise). WAL: http: wndomains.fbk.eu wnaffect.html General Inquirer (GI) @cite_3 has 11,788 words labeled with 182 categories of word tags, including positive and negative polarity. GI: http: www.wjh.harvard.edu @math inquirer Affective Norms for English Words (ANEW) has pleasure (happy--unhappy), arousal (excited--calm), and dominance (controlled--in control) ratings for 1034 words. ANEW: http: csea.phhp.ufl.edu media anewmessage.html Mohammad and Turney compiled emotion annotations for about 4000 words with eight emotions (six of Ekman, trust, and anticipation).
|
{
"cite_N": [
"@cite_0",
"@cite_3",
"@cite_10",
"@cite_20"
],
"mid": [
"2100304601",
"2082291422",
"2096703131",
"2404480901"
],
"abstract": [
"This paper describes an investigation of authorship gender attribution mining from e-mail text documents. We used an extended set of predominantly topic content-free e-mail document features such as style markers, structural characteristics and gender-preferential language features together with a support vector machine learning algorithm. Experiments using a corpus of e-mail documents generated by a large number of authors of both genders gave promising results for author gender categorisation.",
"",
"In this paper, we investigate the topic of gender identification for short length, multi-genre, content-free e-mails. We introduce for the first time (to our knowledge), psycholinguistic and gender-linked cues for this problem, along with traditional stylometric features. Decision tree and Support Vector Machines learning algorithms are used to identify the gender of the author of a given e-mail. The experiment results show that our approach is promising with an average accuracy of 82.2 .",
"In this paper we present a linguistic resource for the lexical representation of affective knowledge. This resource (named WORDNETAFFECT) was developed starting from WORDNET, through a selection and tagging of a subset of synsets representing the affective"
]
}
|
1309.6297
|
2042971572
|
We introduce novel mathematical models and algorithms to generate (shortest or k different) explanations for biomedical queries, using answer set programming. We implement these algorithms and integrate them in BioQuery-ASP. We illustrate the usefulness of these methods with some complex biomedical queries related to drug discovery, over the biomedical knowledge resources PharmGKB, DrugBank, BioGRID, CTD, SIDER, Disease Ontology, and Orphadata.
|
The most recent work related to explanation generation in ASP are @cite_10 @cite_13 @cite_1 @cite_31 @cite_35 @cite_34 , in the context of debugging ASP programs. Among those, @cite_13 studies why a program does not have an answer set, and @cite_1 @cite_35 studies why a set of atoms is not an answer set. As we study the problem of explaining the reasons why atoms are in the answer set, our work differs from those two work.
|
{
"cite_N": [
"@cite_35",
"@cite_10",
"@cite_1",
"@cite_31",
"@cite_34",
"@cite_13"
],
"mid": [
"2169041689",
"1495291840",
"2171836055",
"2138596350",
"1555658928",
""
],
"abstract": [
"An important issue towards a broader acceptance of answer-set programming (ASP) is the deployment of tools which support the programmer during the coding phase. In particular, methods for debugging an answer-set program are recognised as a crucial step in this regard. Initial work on debugging in ASP mainly focused on propositional programs, yet practical debuggers need to handle programs with variables as well. In this paper, we discuss a debugging technique that is directly geared towards non-ground programs. Following previous work, we address the central debugging question why some interpretation is not an answer set. The explanations provided by our method are computed by means of a meta-programming technique, using a uniform encoding of a debugging request in terms of ASP itself. Our method also permits programs containing comparison predicates and integer arithmetics, thus covering a relevant language class commonly supported by all state-of-the-art ASP solvers.",
"This paper discusses the background, algorithms and implementation techniques to support programmers in 'debugging' logic programs under the answer set semantics. We first investigate what constitutes an error in such programs and which classes of errors exist. This is used to motivate techniques and algorithms that respectively eliminate certain classes of errors and explain how an error occurred and why it occurred. Finally, details of the IDEAS (Interactive Development and Evaluation tool for Answer Set Semantics) system are given, a prototype version of which implements all of the techniques described.",
"Answer-set programming (ASP) is widely recognised as a viable tool for declarative problem solving. However, there is currently a lack of tools for developing answer-set programs. In particular, providing tools for debugging answer-set programs has recently been identified as a crucial prerequisite for a wider acceptance of ASP. In this paper, we introduce a meta-programming technique for debugging in ASP. The basic question we address is why interpretations expected to be answer sets are not answer sets of the program to debug. We thus deal with finding semantical errors of programs. The explanations provided by our method are based on an intuitive scheme of errors that relies on a recent characterisation of the answer-set semantics. Furthermore, as we are using a meta-programming technique, debugging queries are expressed in terms of answer-set programs themselves, which has several benefits: For one, we can directly use ASP solvers for processing debugging queries. Indeed, our technique can easily be implemented, and we devised a corresponding prototype debugging system. Also, our approach respects the declarative nature of ASP, and the capabilities of the system can easily be extended to incorporate differing debugging features.",
"The paper introduces the notion of offline justification for answer set programming (ASP). Justifications provide a graph-based explanation of the truth value of an atom with respect to a given answer set. The paper extends also this notion to provide justification of atoms during the computation of an answer set (on-line justification) and presents an integration of online justifications within the computation model of SMODELS. Offline and online justifications provide useful tools to enhance understanding of ASP, and they offer a basic data structure to support methodologies and tools for debugging answer set programs. A preliminary implementation has been developed in .",
"We introduce a framework for interactive stepping through an answerset program as a means for debugging. In procedural languages, stepping is a widespread and effective debugging strategy. The idea is to gain insight into the behaviour of a program by executing statement by statement, following the program's control flow. Stepping has not been considered for answer-set programs so far, presumably because of their lack of a control flow. The framework we provide allows for stepwise constructing interpretations following the user's intuition on which rule instances to become active. That is, we do not impose any ordering on the rules but give the programmer the freedom to guide the stepping process. Due to simple syntactic restrictions, each step results in a state that guarantees stability of the intermediate interpretation. We present how stepping can be started from breakpoints as in conventional programming and discuss how the approach can be used for debugging using a running example.",
""
]
}
|
1309.6297
|
2042971572
|
We introduce novel mathematical models and algorithms to generate (shortest or k different) explanations for biomedical queries, using answer set programming. We implement these algorithms and integrate them in BioQuery-ASP. We illustrate the usefulness of these methods with some complex biomedical queries related to drug discovery, over the biomedical knowledge resources PharmGKB, DrugBank, BioGRID, CTD, SIDER, Disease Ontology, and Orphadata.
|
@cite_10 , similar to our work, the question why is an atom @math in an answer set @math for an ASP program @math '' is studied. As an answer to this question, the authors of @cite_10 provide the rule in @math that supports @math with respect to @math ; whereas we compute shortest or @math different explanations (as a tree whose vertices are labeled by rules).
|
{
"cite_N": [
"@cite_10"
],
"mid": [
"1495291840"
],
"abstract": [
"This paper discusses the background, algorithms and implementation techniques to support programmers in 'debugging' logic programs under the answer set semantics. We first investigate what constitutes an error in such programs and which classes of errors exist. This is used to motivate techniques and algorithms that respectively eliminate certain classes of errors and explain how an error occurred and why it occurred. Finally, details of the IDEAS (Interactive Development and Evaluation tool for Answer Set Semantics) system are given, a prototype version of which implements all of the techniques described."
]
}
|
1309.6297
|
2042971572
|
We introduce novel mathematical models and algorithms to generate (shortest or k different) explanations for biomedical queries, using answer set programming. We implement these algorithms and integrate them in BioQuery-ASP. We illustrate the usefulness of these methods with some complex biomedical queries related to drug discovery, over the biomedical knowledge resources PharmGKB, DrugBank, BioGRID, CTD, SIDER, Disease Ontology, and Orphadata.
|
PontelliSE09 also introduce the notion of an online justification that aims to justify the truth values of atoms during the computation of an answer set. @cite_34 , a framework where the users can construct interpretations through an interactive stepping process is introduced. As a result, @cite_31 and @cite_34 can be used together to provide the users with justifications of the truth values of atoms during the construction of interpretations interactively through stepping.
|
{
"cite_N": [
"@cite_31",
"@cite_34"
],
"mid": [
"2138596350",
"1555658928"
],
"abstract": [
"The paper introduces the notion of offline justification for answer set programming (ASP). Justifications provide a graph-based explanation of the truth value of an atom with respect to a given answer set. The paper extends also this notion to provide justification of atoms during the computation of an answer set (on-line justification) and presents an integration of online justifications within the computation model of SMODELS. Offline and online justifications provide useful tools to enhance understanding of ASP, and they offer a basic data structure to support methodologies and tools for debugging answer set programs. A preliminary implementation has been developed in .",
"We introduce a framework for interactive stepping through an answerset program as a means for debugging. In procedural languages, stepping is a widespread and effective debugging strategy. The idea is to gain insight into the behaviour of a program by executing statement by statement, following the program's control flow. Stepping has not been considered for answer-set programs so far, presumably because of their lack of a control flow. The framework we provide allows for stepwise constructing interpretations following the user's intuition on which rule instances to become active. That is, we do not impose any ordering on the rules but give the programmer the freedom to guide the stepping process. Due to simple syntactic restrictions, each step results in a state that guarantees stability of the intermediate interpretation. We present how stepping can be started from breakpoints as in conventional programming and discuss how the approach can be used for debugging using a running example."
]
}
|
1309.5503
|
2004484707
|
When a user views an archived page using the archive's user interface (UI), the user selects a datetime to view from a list. The archived web page, if available, is then displayed. From this display, the web archive UI attempts to simulate the web browsing experience by smoothly transitioning between archived pages. During this process, the target datetime changes with each link followed; drifting away from the datetime originally selected. When browsing sparsely-archived pages, this nearly-silent drift can be many years in just a few clicks. We conducted 200,000 acyclic walks of archived pages, following up to 50 links per walk, comparing the results of two target datetime policies. The Sliding Target policy allows the target datetime to change as it does in archive UIs such as the Internet Archive's Wayback Machine. The Sticky Target policy, represented by the Memento API, keeps the target datetime the same throughout the walk. We found that the Sliding Target policy drift increases with the number of walk steps, number of domains visited, and choice (number of links available). However, the Sticky Target policy controls temporal drift, holding it to less than 30 days on average regardless of walk length or number of domains visited. The Sticky Target policy shows some increase as choice increases, but this may be caused by other factors. We conclude that based on walk length, the Sticky Target policy generally produces at least 30 days less drift than the Sliding Target policy.
|
Although the need for web archiving has been understood since nearly the dawn of the Web @cite_24 , these efforts have been for the most part independent in motivation, requirements, and scope. The Internet Archive, the first archive to attempt global scope, came into existence in 1995 @cite_14 . Since then, many other archives have come into existence. Some of these use software developed by the Internet Archive and have similar capture behavior and user interfaces; however, other archives such as WebCite @cite_8 have significantly different capture behaviors.
|
{
"cite_N": [
"@cite_24",
"@cite_14",
"@cite_8"
],
"mid": [
"2160889741",
"260381239",
"1994530080"
],
"abstract": [
"Although librarians recognize the Internet as a resource for knowledge and information, they have yet to make a formal effort to collect and preserve the Web sites found there. This paper addresses not only the need to set up a cyberarchive but also some of the issues involved. With Web sites appearing and disappearing constantly from the Internet, the time to save them is now—before we lose a precious thread in our cultural and intellectual history.",
"",
"Scholars are increasingly citing electronic “web references” which are not preserved in libraries or full text archives. WebCite is a new standard for citing web references. To “webcite” a document involves archiving the cited Web page through www.webcitation.org and citing the WebCite permalink instead of (or in addition to) the unstable live Web page. This journal has amended its “instructions for authors” accordingly, asking authors to archive cited Web pages before submitting a manuscript. Almost 200 other journals are already using the system. We discuss the rationale for WebCite, its technology, and how scholars, editors, and publishers can benefit from the service. Citing scholars initiate an archiving process of all cited Web references, ideally before they submit a manuscript. Authors of online documents and websites which are expected to be cited by others can ensure that their work is permanently available by creating an archived copy using WebCite and providing the citation information including the WebCite link on their Web document(s). Editors should ask their authors to cache all cited Web addresses (Uniform Resource Locators, or URLs) “prospectively” before submitting their manuscripts to their journal. Editors and publishers should also instruct their copyeditors to cache cited Web material if the author has not done so already. Finally, WebCite can process publisher submitted “citing articles” (submitted for example as eXtensible Markup Language [XML] documents) to automatically archive all cited Web pages shortly before or on publication. Finally, WebCite can act as a focussed crawler, caching retrospectively references of already published articles. Copyright issues are addressed by honouring respective Internet standards (robot exclusion files, no-cache and no-archive tags). Long-term preservation is ensured by agreements with libraries and digital preservation organizations. The resulting WebCite Index may also have applications for research assessment exercises, being able to measure the impact of Web services and published Web documents through access and Web citation metrics. @PARASPLIT [J Med Internet Res 2005;7(5):e60]"
]
}
|
1309.5571
|
2070108936
|
Abstract In the numerical analysis of strongly correlated quantum lattice models one of the leading algorithms developed to balance the size of the effective Hilbert space and the accuracy of the simulation is the density matrix renormalization group (DMRG) algorithm, in which the run-time is dominated by the iterative diagonalization of the Hamilton operator. As the most time-dominant step of the diagonalization can be expressed as a list of dense matrix operations, the DMRG is an appealing candidate to fully utilize the computing power residing in novel kilo-processor architectures. In the paper a smart hybrid CPU–GPU implementation is presented, which exploits the power of both CPU and GPU and tolerates problems exceeding the GPU memory size. Furthermore, a new CUDA kernel has been designed for asymmetric matrix–vector multiplication to accelerate the rest of the diagonalization. Besides the evaluation of the GPU implementation, the practical limits of an FPGA implementation are also discussed.
|
DMRG is a variational numerical approach developed to treat low-dimensional interacting many-body quantum systems efficiently @cite_32 @cite_4 @cite_15 . In fact, it has become an exceptionally successful method to study the low energy physics of strongly correlated quantum systems which exhibit chain-like entanglement structure @cite_11 . For example, it can be applied to simulate properties of anisotropic materials, such as polymers @cite_25 , or to describe accurately the electronic structure of open @math shell molecules @cite_19 , which is beyond the capability of standard quantum chemical approaches. Additionally, the interacting system of atoms trapped in an optical lattice, proposed as physical implementation of quantum computer, is also tractable via DMRG @cite_2 .
|
{
"cite_N": [
"@cite_4",
"@cite_32",
"@cite_19",
"@cite_2",
"@cite_15",
"@cite_25",
"@cite_11"
],
"mid": [
"1680853461",
"2037768897",
"1972507693",
"1963988846",
"2154815292",
"1620689334",
"2140936017"
],
"abstract": [
"In these lecture notes, we present a pedagogical review of a number of related numerically exact approaches to quantum many‐body problems. In particular, we focus on methods based on the exact diagonalization of the Hamiltonian matrix and on methods extending exact diagonalization using renormalization group ideas, i.e., Wilson’s Numerical Renormalization Group (NRG) and White’s Density Matrix Renormalization Group (DMRG). These methods are standard tools for the investigation of a variety of interacting quantum systems, especially low‐dimensional quantum lattice models. We also survey extensions to the methods to calculate properties such as dynamical quantities and behavior at finite temperature, and discuss generalizations of the DMRG method to a wider variety of systems, such as classical models and quantum chemical problems. Finally, we briefly review some recent developments for obtaining a more general formulation of the DMRG in the context of matrix product states as well as recent progress in cal...",
"A generalization of the numerical renormalization-group procedure used first by Wilson for the Kondo problem is presented. It is shown that this formulation is optimal in a certain sense. As a demonstration of the effectiveness of this approach, results from numerical real-space renormalization-group calculations for Heisenberg chains are presented.",
"We have studied transition met al clusters from a quantum information theory perspective using the density-matrix renormalization group (DMRG) method. We demonstrate the competition between entanglement and interaction localization and discuss the application of the configuration interaction-based dynamically extended active space procedure, which significantly reduces the effective system size and accelerates the speed of convergence for complicated molecular electronic structures. Our results indicate the importance of taking entanglement among molecular orbitals into account in order to devise an optimal DMRG orbital ordering and carry out efficient calculations on transition met al clusters. Apart from these algorithmic observations, which lead to a recipe for black-box DMRG calculations, our work provides physical understanding of electron correlation in molecular and cluster structures in terms of entropy measures of relevance also to recent work on tensor-network representations of electronic states. We also identify those molecular orbitals which are highly entangled and discuss the consequences for chemical bonding and for the structural transition from an dioxygen binding copper cluster to an bis-oxygen-bridged system with broken O-O bond.",
"We review recent developments in the physics of ultracold atomic and molecular gases in optical lattices. Such systems are nearly perfect realisations of various kinds of Hubbard models, and as such may very well serve to mimic condensed matter phenomena. We show how these systems may be employed as quantum simulators to answer some challenging open questions of condensed matter, and even high energy physics. After a short presentation of the models and the methods of treatment of such systems, we discuss in detail, which challenges of condensed matter physics can be addressed with (i) disordered ultracold lattice gases, (ii) frustrated ultracold gases, (iii) spinor lattice gases, (iv) lattice gases in “artificial” magnetic fields, and, last but not least, (v) quantum information processing in lattice gases. For completeness, also some recent progress related to the above topics with trapped cold gases will be discussed. Motto: There are more things in heaven and earth, Horatio, Than are dreamt of in your...",
"The density-matrix renormalization group (DMRG) is a numerical algorithm for the efficient truncation of the Hilbert space of low-dimensional strongly correlated quantum systems based on a rather general decimation prescription. This algorithm has achieved unprecedented precision in the description of one-dimensional quantum systems. It has therefore quickly become the method of choice for numerical studies of such systems. Its applications to the calculation of static, dynamic, and thermodynamic quantities in these systems are reviewed here. The potential of DMRG applications in the fields of two-dimensional quantum systems, quantum chemistry, three-dimensional small grains, nuclear physics, equilibrium and nonequilibrium statistical physics, and time-dependent phenomena is also discussed. This review additionally considers the theoretical foundations of the method, examining its relationship to matrix-product states and the quantum information content of the density matrices generated by the DMRG.",
"1. Introduction to Conjugated Polymers 2. Pi-Electron Theory of Conjugated Polymers 3. Non-Interacting Electrons 4. Electron-Lattice Coupling I: Non-Interacting Electrons 5. Interacting Electrons 6. Excitons in Conjugated Polymers 7. Electron-Lattice Coupling II: Interacting Electrons 8. Optical Processes in Conjugated Polymers 9. Electronic Processes in Conjugated Polymers 10. Linear Polyenes and Trans-Polyacetylene 11. Light Emitting Polymers A. Dirac Bra-ket Operator Representation of One-Particle Hamiltonians B. Particle-Hole Symmetry and Average Occupation Number C. Single-Particle Eigensolutions of a Periodic Polymer Chain D. Derivation of the Effective-Particle Schroedinger Equation E. Hydrogenic Solutions to the Effective-Particle Exciton Models F. Evaluation of the Electronic Transition Dipole Moments G. Valence-Bond Description of Benzene H. Density Matrix Renormalization Group Method",
"In the past few years, there has been an increasingly active exchange of ideas and methods between the formerly rather disjunct fields of quantum information and many-body physics. This has been due, on the one hand, to the growing sophisti- cation of methods and the increasing complexity of problems treated in quantum information theory, and, on the other, to the recognition that a number of central issues in many-body quantum systems can fruitfully be approached from the quan- tum information point of view. Nowhere has this been more evident than in the context of the family of numerical methods that go under the rubric density-matrix renormalization group. In particular, the concept of entanglement and its definition, measurement, and manipulation lies at the heart of much of quantum information theory (1). The density-matrix renormalization group (DMRG) methods use proper- ties of the entanglement of a bipartite system to build up an accurate approximation to particular many-body wave functions. The cross-fertilization between the two fields has led to improvements in the understanding of interacting quantum systems in general and the DMRG method in particular, has led to new algorithms related to and generalizing the DMRG, and has opened up the possibility of studying many new physical problems, ones of interest both for quantum information theory and for understanding the behavior of strongly correlated quantum systems (2). In this line, we discuss some relevant concepts in quantum information theory, including the relation between the DMRG and data compression and entanglement. As an application, we will use the quantum information entropy calculated with the DMRG to study quantum phase transitions, in particular in the bilinear-biquadratic spin-one chain and in the frustrated spin-1 2 Heisenberg chain."
]
}
|
1309.5571
|
2070108936
|
Abstract In the numerical analysis of strongly correlated quantum lattice models one of the leading algorithms developed to balance the size of the effective Hilbert space and the accuracy of the simulation is the density matrix renormalization group (DMRG) algorithm, in which the run-time is dominated by the iterative diagonalization of the Hamilton operator. As the most time-dominant step of the diagonalization can be expressed as a list of dense matrix operations, the DMRG is an appealing candidate to fully utilize the computing power residing in novel kilo-processor architectures. In the paper a smart hybrid CPU–GPU implementation is presented, which exploits the power of both CPU and GPU and tolerates problems exceeding the GPU memory size. Furthermore, a new CUDA kernel has been designed for asymmetric matrix–vector multiplication to accelerate the rest of the diagonalization. Besides the evaluation of the GPU implementation, the practical limits of an FPGA implementation are also discussed.
|
Graphical processing unit has been successfully employed in neighboring research areas to accelerate matrix operations. @cite_1 GPU is used to accelerate tensor contractions in plaquette renormalization states (PRS), which can be regarded as an alternative technique to tensor product states (TPS) or the DMRG algorithm. @cite_18 the second-order spectral projection (SP2) algorithm has been accelerated, which is an alternative technique to calculate the density matrix via a recursive series of generalized matrix-matrix multiplications.
|
{
"cite_N": [
"@cite_18",
"@cite_1"
],
"mid": [
"2318319753",
"2171520481"
],
"abstract": [
"The self-consistent solution of a Schrodinger-like equation for the density matrix is a critical and computationally demanding step in quantum-based models of interatomic bonding. This step was tackled historically via the diagonalization of the Hamiltonian. We have investigated the performance and accuracy of the second-order spectral projection (SP2) algorithm for the computation of the density matrix via a recursive expansion of the Fermi operator in a series of generalized matrix–matrix multiplications. We demonstrate that owing to its simplicity, the SP2 algorithm [Niklasson, A. M. N. Phys. Rev. B2002, 66, 155115] is exceptionally well suited to implementation on graphics processing units (GPUs). The performance in double and single precision arithmetic of a hybrid GPU central processing unit (CPU) and full GPU implementation of the SP2 algorithm exceed those of a CPU-only implementation of the SP2 algorithm and traditional matrix diagonalization when the dimensions of the matrices exceed about 2000 ...",
"Abstract We use the graphical processing unit (GPU) to accelerate the tensor contractions, which is the most time consuming operations in the variational method based on the plaquette renormalized states. Using a frustrated Heisenberg J 1 – J 2 model on a square lattice as an example, we implement the algorithm based on the compute unified device architecture (CUDA). For a single plaquette contraction with the bond dimensions C = 3 of each rank of the tensor, results are obtained 25 times faster on GPU than on a current CPU core. This makes it possible to simulate systems with the size 8 × 8 and larger, which are extremely time consuming on a single CPU. This technology successfully relieves the computing time dependence with C , while in the CPU serial computation, the total required time scales both with C and the system size."
]
}
|
1309.5909
|
2950494993
|
Today we have access to unprecedented amounts of literary texts. However, search still relies heavily on key words. In this paper, we show how sentiment analysis can be used in tandem with effective visualizations to quantify and track emotions in both individual books and across very large collections. We introduce the concept of emotion word density, and using the Brothers Grimm fairy tales as example, we show how collections of text can be organized for better search. Using the Google Books Corpus we show how to determine an entity's emotion associations from co-occurring words. Finally, we compare emotion words in fairy tales and novels, to show that fairy tales have a much wider range of emotion word densities than novels.
|
Over the last decade, there has been considerable work in sentiment analysis, especially in determining whether a term has a positive or negative polarity @cite_19 @cite_2 @cite_15 . There is also work in more sophisticated aspects of sentiment, for example, in detecting emotions such as anger, joy, sadness, fear, surprise, and disgust @cite_18 @cite_21 @cite_12 @cite_12 . The technology is still developing and it can be unpredictable when dealing with short sentences, but it has been shown to be reliable when drawing conclusions from large amounts of text @cite_5 @cite_9 .
|
{
"cite_N": [
"@cite_18",
"@cite_9",
"@cite_21",
"@cite_19",
"@cite_2",
"@cite_5",
"@cite_15",
"@cite_12"
],
"mid": [
"2162555959",
"2097726431",
"2162010436",
"",
"2168625136",
"",
"2160250477",
"2168493061"
],
"abstract": [
"Though data-driven in nature, emotion analysis based on latent semantic analysis still relies on some measure of expert knowledge in order to isolate the emotional keywords or keysets necessary to the construction of affective categories. This makes it vulnerable to any discrepancy between the ensuing taxonomy of affective states and the underlying domain of discourse. This paper proposes a more general strategy which leverages two distincts semantic levels, one that encapsulates the foundations of the domain considered, and one that specifically accounts for the overall affective fabric of the language. Exposing the emergent relationship between these two levels advantageously informs the emotion classification process. Empirical evidence suggests that this is a promising solution for automatic emotion detection in text.",
"An important part of our information-gathering behavior has always been to find out what other people think. With the growing availability and popularity of opinion-rich resources such as online review sites and personal blogs, new opportunities and challenges arise as people now can, and do, actively use information technologies to seek out and understand the opinions of others. The sudden eruption of activity in the area of opinion mining and sentiment analysis, which deals with the computational treatment of opinion, sentiment, and subjectivity in text, has thus occurred at least in part as a direct response to the surge of interest in new systems that deal directly with opinions as a first-class object. This survey covers techniques and approaches that promise to directly enable opinion-oriented information-seeking systems. Our focus is on methods that seek to address the new challenges raised by sentiment-aware applications, as compared to those that are already present in more traditional fact-based analysis. We include material on summarization of evaluative text and on broader issues regarding privacy, manipulation, and economic impact that the development of opinion-oriented information-access services gives rise to. To facilitate future work, a discussion of available resources, benchmark datasets, and evaluation campaigns is also provided.",
"Even though considerable attention has been given to semantic orientation of words and the creation of large polarity lexicons, research in emotion analysis has had to rely on limited and small emotion lexicons. In this paper, we show how we create a high-quality, moderate-sized emotion lexicon using Mechanical Turk. In addition to questions about emotions evoked by terms, we show how the inclusion of a word choice question can discourage malicious data entry, help identify instances where the annotator may not be familiar with the target term (allowing us to reject such annotations), and help obtain annotations at sense level (rather than at word level). We perform an extensive analysis of the annotations to better understand the distribution of emotions evoked by terms of different parts of speech. We identify which emotions tend to be evoked simultaneously by the same term and show that certain emotions indeed go hand in hand.",
"",
"The evaluative character of a word is called its semantic orientation. Positive semantic orientation indicates praise (e.g., \"honest\", \"intrepid\") and negative semantic orientation indicates criticism (e.g., \"disturbing\", \"superfluous\"). Semantic orientation varies in both direction (positive or negative) and degree (mild to strong). An automated system for measuring semantic orientation would have application in text classification, text filtering, tracking opinions in online discussions, analysis of survey responses, and automated chat systems (chatbots). This article introduces a method for inferring the semantic orientation of a word from its statistical association with a set of positive and negative paradigm words. Two instances of this approach are evaluated, based on two different statistical measures of word association: pointwise mutual information (PMI) and latent semantic analysis (LSA). The method is experimentally tested with 3,596 words (including adjectives, adverbs, nouns, and verbs) that have been manually labeled positive (1,614 words) and negative (1,982 words). The method attains an accuracy of 82.8p on the full test set, but the accuracy rises above 95p when the algorithm is allowed to abstain from classifying mild words.",
"",
"Sentiment analysis often relies on a semantic orientation lexicon of positive and negative words. A number of approaches have been proposed for creating such lexicons, but they tend to be computationally expensive, and usually rely on significant manual annotation and large corpora. Most of these methods use WordNet. In contrast, we propose a simple approach to generate a high-coverage semantic orientation lexicon, which includes both individual words and multi-word expressions, using only a Roget-like thesaurus and a handful of affixes. Further, the lexicon has properties that support the Polyanna Hypothesis. Using the General Inquirer as gold standard, we show that our lexicon has 14 percentage points more correct entries than the leading WordNet-based high-coverage lexicon (SentiWordNet). In an extrinsic evaluation, we obtain significantly higher performance in determining phrase polarity using our thesaurus-based lexicon than with any other. Additionally, we explore the use of visualization techniques to gain insight into the our algorithm beyond the evaluations mentioned above.",
"In addition to information, text contains attitudinal, and more specifically, emotional content. This paper explores the text-based emotion prediction problem empirically, using supervised machine learning with the SNoW learning architecture. The goal is to classify the emotional affinity of sentences in the narrative domain of children's fairy tales, for subsequent usage in appropriate expressive rendering of text-to-speech synthesis. Initial experiments on a preliminary data set of 22 fairy tales show encouraging results over a naive baseline and BOW approach for classification of emotional versus non-emotional contents, with some dependency on parameter tuning. We also discuss results for a tripartite model which covers emotional valence, as well as feature set alternations. In addition, we present plans for a more cognitively sound sequential model, taking into consideration a larger set of basic emotions."
]
}
|
1309.5909
|
2950494993
|
Today we have access to unprecedented amounts of literary texts. However, search still relies heavily on key words. In this paper, we show how sentiment analysis can be used in tandem with effective visualizations to quantify and track emotions in both individual books and across very large collections. We introduce the concept of emotion word density, and using the Brothers Grimm fairy tales as example, we show how collections of text can be organized for better search. Using the Google Books Corpus we show how to determine an entity's emotion associations from co-occurring words. Finally, we compare emotion words in fairy tales and novels, to show that fairy tales have a much wider range of emotion word densities than novels.
|
Automatic analysis of emotions in text has so far had to rely on small emotion lexicons. The WordNet Affect Lexicon (WAL) @cite_17 has a few hundred words annotated with associations to a number of affect categories including the six Ekman emotions (joy, sadness, anger, fear, disgust, and surprise). WAL: http: wndomains.fbk.eu wnaffect.html General Inquirer (GI) @cite_3 has 11,788 words labeled with 182 categories of word tags, including positive and negative polarity. GI: http: www.wjh.harvard.edu @math inquirer We use the NRC Emotion Lexicon @cite_8 @cite_21 , a large set of human-provided word--emotion association ratings, in our experiments. Please send an e-mail to saif.mohammad@nrc-cnrc.gc.ca to obtain the latest version of the NRC Emotion Lexicon.
|
{
"cite_N": [
"@cite_8",
"@cite_21",
"@cite_3",
"@cite_17"
],
"mid": [
"2950693256",
"2162010436",
"2082291422",
"2404480901"
],
"abstract": [
"With the widespread use of email, we now have access to unprecedented amounts of text that we ourselves have written. In this paper, we show how sentiment analysis can be used in tandem with effective visualizations to quantify and track emotions in many types of mail. We create a large word--emotion association lexicon by crowdsourcing, and use it to compare emotions in love letters, hate mail, and suicide notes. We show that there are marked differences across genders in how they use emotion words in work-place email. For example, women use many words from the joy--sadness axis, whereas men prefer terms from the fear--trust axis. Finally, we show visualizations that can help people track emotions in their emails.",
"Even though considerable attention has been given to semantic orientation of words and the creation of large polarity lexicons, research in emotion analysis has had to rely on limited and small emotion lexicons. In this paper, we show how we create a high-quality, moderate-sized emotion lexicon using Mechanical Turk. In addition to questions about emotions evoked by terms, we show how the inclusion of a word choice question can discourage malicious data entry, help identify instances where the annotator may not be familiar with the target term (allowing us to reject such annotations), and help obtain annotations at sense level (rather than at word level). We perform an extensive analysis of the annotations to better understand the distribution of emotions evoked by terms of different parts of speech. We identify which emotions tend to be evoked simultaneously by the same term and show that certain emotions indeed go hand in hand.",
"",
"In this paper we present a linguistic resource for the lexical representation of affective knowledge. This resource (named WORDNETAFFECT) was developed starting from WORDNET, through a selection and tagging of a subset of synsets representing the affective"
]
}
|
1309.5826
|
2273168557
|
This paper investigates the opportunities and limitations of adaptive virtual machine (VM) migration to reduce communication costs in a virtualized environment. We introduce a new formal model for the problem of online VM migration in two scenarios: (1) VMs can be migrated arbitrarily in the substrate network; e.g., a private cloud provider may have an incentive to reduce the overall communication cost in the network. (2) VMs can only be migrated within a given tenant; e.g., a user that was assigned a set of physical machines may exchange the functionality of the VMs on these machines. We propose a simple class of Destination-Swap algorithms which are based on an aggressive collocation strategy (inspired by splay datastructures) and which maintain a minimal and local amount of per-node (amortized cost) information to decide where to migrate a VM and how; thus, the algorithms react quickly to changes in the load. The algorithms come in two main flavors, an indirect and distributed variant which keeps existing VM placements local, and a direct variant which keeps the number of affected VMs small. We show that naturally, inter-tenant optimizations yield a larger potential for optimization, but generally also a tenant itself can improve its embedding. Moreover, there exists an interesting tradeoff between direct and indirect strategies: indirect variants are preferable under skewed and sparse communication patterns due to their locality properties.
|
Today's networks become more and more dynamic in the sense that they are able to self-adjust to the network state, user demand, or even energy cost, and the benefits of process migration have been exploited long before the emergence of the cloud computing paradigm, e.g., for load-balancing applications @cite_27 . Generally, applications range from self-optimizing peer-to-peer topologies over green computing (e.g., due to reduced energy consumption) @cite_3 to adaptive virtual machine migrations in datacenters @cite_29 , microprocessor memory architectures @cite_6 , grids @cite_22 or elastic virtual and wide-area cloud networks @cite_30 . Other self-adjusting routing scheme were considered, e.g., in scale-free networks to overcome congestion @cite_18 .
|
{
"cite_N": [
"@cite_30",
"@cite_18",
"@cite_22",
"@cite_29",
"@cite_6",
"@cite_3",
"@cite_27"
],
"mid": [
"2050714144",
"2016888862",
"",
"2066741260",
"2055128632",
"148964820",
"2068666958"
],
"abstract": [
"This paper attends to the problem of embedding flexibly specified virtual networks connecting cloud resources (e.g., storage or computation) on a given substrate (e.g., a data center, an ISP backbone, a router site, or a virtual provider network). We study a scenario where a substrate provider (or a potential intermediate broker or reseller) wants to optimize the embedding of these so-called by migrating them to more suitable locations. For instance, such re-embeddings can be useful if the were requested at short notice and initially placed heuristically. Subsequent optimizations can, e.g., reduce the peak resource loads in the network by spreading across the infrastructure or save energy by moving together and switching off unused components. We present the generic mathematical programming algorithm used in our prototype to compute optimal embeddings. For example, this algorithm supports different objective functions (such as load minimization or energy conservation), arbitrary resource combinations and the mapping of multiple virtual nodes of a to a single substrate node, cost-aware migrations, and it can deal with all link types that arise in practice (e.g., full-duplex or even wireless or wired broadcast links with multiple endpoints). Of course, such rigorous optimizations are time consuming, and we report on the time complexities obtained from our experiments with our network virtualization prototype architecture. It turns out that optimizing over moderate sized infrastructures is feasible, even for scenarios with high flexibility and without tuning the solver software to speed up computations further.",
"We consider the effects of time-varying packet generation rates in the performance of communication networks. The time variations could be a result of the patterns in human activities. As a model, we study the effects of a degree-dependent packet generation rate that includes a sinusoidal term. Applying a modified traffic awareness protocol (TAP) previously proposed for static packet generation rates to the present situation leads to an altered value of the optimization parameter, when compared to that obtained in the static case. To enhance the performance and to cope with the time-varying effects better, we propose a class of self-adjusting traffic awareness protocols that makes use of instantaneous traffic information beyond that included in the modified TAP. Two special cases that make use of global and local information, respectively, are studied. Comparing results of our proposal schemes with the modified TAP, it is shown that the present self-adjusting schemes perform more effectively.",
"",
"The goal of data center network is to interconnect the massive number of data center servers, and provide efficient and fault-tolerant routing service to upper-layer applications. To overcome the problem of tree architecture in current practice, many new network architectures are proposed, represented by Fat-Tree, BCube, and etc. A consistent theme in these new architectures is that a large number of network devices are used to achieve 1:1 oversubscription ratio. However, at most time, data center traffic is far below the peak value. The idle network devices will waste significant amount of energy, which is now a headache for many data center owners. In this paper, we discuss how to save energy consumption in high-density data center networks in a routing perspective. We call this kind of routing energy-aware routing. The key idea is to use as few network devices to provide the routing service as possible, with no little sacrifice on the network performance. Meanwhile, the idle network devices can be shutdown or put into sleep mode for energy saving. We establish the model of energy-aware routing in data center network, and design a heuristic algorithm to achieve the idea. Our simulation in typical data center networks shows that energy-aware routing can effectively save power consumed by network devices.",
"Driven by increasingly unbalanced technology scaling and power dissipation limits, microprocessor designers have resorted to increasing the number of cores on a single chip, and pundits expect 1000-core designs to materialize in the next few years [1]. But how will memory architectures scale and how will these next-generation multicores be programmed? One barrier to scaling current memory architectures is the offchip memory bandwidth wall [1,2]: off-chip bandwidth grows with package pin density, which scales much more slowly than on-die transistor density [3]. To reduce reliance on external memories and keep data on-chip, today’s multicores integrate very large shared last-level caches on chip [4]; interconnects used with such shared caches, however, do not scale beyond relatively few cores, and the power requirements and access latencies of large caches exclude their use in chips on a 1000-core scale. For massive-scale multicores, then, we are left with relatively small per-core caches. Per-core caches on a 1000-core scale, in turn, raise the question of memory coherence. On the one hand, a shared memory abstraction is a practical necessity for general-purpose programming, and most programmers prefer a shared memory model [5]. On the other hand, ensuring coherence among private caches is an expensive proposition: bus-based and snoopy protocols don’t scale beyond relatively few cores, and directory sizes needed in cache-coherence protocols must equal a significant portion of the combined size of the per-core caches as otherwise directory evictions will limit performance [6]. Moreover, directory-based coherence protocols are notoriously difficult to implement and verify [7].",
"Networks are a shared resource connecting critical IT infrastructure, and the general practice is to always leave them on. Yet, meaningful energy savings can result from improving a network's ability to scale up and down, as traffic demands ebb and flow. We present ElasticTree, a network-wide power1 manager, which dynamically adjusts the set of active network elements -- links and switches--to satisfy changing data center traffic loads. We first compare multiple strategies for finding minimum-power network subsets across a range of traffic patterns. We implement and analyze ElasticTree on a prototype testbed built with production OpenFlow switches from three network vendors. Further, we examine the trade-offs between energy efficiency, performance and robustness, with real traces from a production e-commerce website. Our results demonstrate that for data center workloads, ElasticTree can save up to 50 of network energy, while maintaining the ability to handle traffic surges. Our fast heuristic for computing network subsets enables ElasticTree to scale to data centers containing thousands of nodes. We finish by showing how a network admin might configure ElasticTree to satisfy their needs for performance and fault tolerance, while minimizing their network power bill.",
"We consider policies for CPU load balancing in networks of workstations. We address the question of whether preemptive migration (migrating active processes) is necessary, or whether remote execution (migrating processes only at the time of birth) is sufficient for load balancing. We show that resolving this issue is strongly tied to understanding the process lifetime distribution. Our measurements indicate that the distribution of lifetimes for a UNIX process is Pareto (heavy-tailed), with a consistent functional form over a variety of workloads. We show how to apply this distribution to derive a preemptive migration policy that requires no hand-tuned parameters. We used a trace-driven simulation to show that our preemptive migration strategy is far more effective than remote execution, even when the memory transfer cost is high."
]
}
|
1309.5826
|
2273168557
|
This paper investigates the opportunities and limitations of adaptive virtual machine (VM) migration to reduce communication costs in a virtualized environment. We introduce a new formal model for the problem of online VM migration in two scenarios: (1) VMs can be migrated arbitrarily in the substrate network; e.g., a private cloud provider may have an incentive to reduce the overall communication cost in the network. (2) VMs can only be migrated within a given tenant; e.g., a user that was assigned a set of physical machines may exchange the functionality of the VMs on these machines. We propose a simple class of Destination-Swap algorithms which are based on an aggressive collocation strategy (inspired by splay datastructures) and which maintain a minimal and local amount of per-node (amortized cost) information to decide where to migrate a VM and how; thus, the algorithms react quickly to changes in the load. The algorithms come in two main flavors, an indirect and distributed variant which keeps existing VM placements local, and a direct variant which keeps the number of affected VMs small. We show that naturally, inter-tenant optimizations yield a larger potential for optimization, but generally also a tenant itself can improve its embedding. Moreover, there exists an interesting tradeoff between direct and indirect strategies: indirect variants are preferable under skewed and sparse communication patterns due to their locality properties.
|
VM migration in cloud computing has been proposed to improve resource utilization as well as to balance load and alleviate hotspots @cite_15 , or even save energy @cite_3 . For example, the VMware Distributed Resource Scheduler uses live migration to balance load in response to CPU and memory contention.
|
{
"cite_N": [
"@cite_15",
"@cite_3"
],
"mid": [
"1552771324",
"148964820"
],
"abstract": [
"Virtualization can deliver significant benefits for cloud computing by enabling VM migration to improve utilization, balance load and alleviate hotspots. While several mechanisms exist to migrate VMs, few efforts have focused on optimizing migration policies in a multi-rooted tree datacenter network. The general problem has multiple facets, two of which map to generalizations of well-studied problems: (1) Migration of VMs in a bandwidth-oversubscribed tree network generalizes the maximum multicommodity flow problem in a tree, and (2) Migrations must meet load constraints at the servers, mapping to variants of the matching problem --- generalized assignment and demand matching. While these problems have been individually studied, a new fundamental challenge is to simultaneously handle the packing constraints of server load and tree edge capacities. We give approximation algorithms for several versions of this problem, where the objective is to alleviate a maximal number of hot servers. In the full version of this work [5], we empirically demonstrate the effectiveness of these algorithms through large scale simulations on real data.",
"Networks are a shared resource connecting critical IT infrastructure, and the general practice is to always leave them on. Yet, meaningful energy savings can result from improving a network's ability to scale up and down, as traffic demands ebb and flow. We present ElasticTree, a network-wide power1 manager, which dynamically adjusts the set of active network elements -- links and switches--to satisfy changing data center traffic loads. We first compare multiple strategies for finding minimum-power network subsets across a range of traffic patterns. We implement and analyze ElasticTree on a prototype testbed built with production OpenFlow switches from three network vendors. Further, we examine the trade-offs between energy efficiency, performance and robustness, with real traces from a production e-commerce website. Our results demonstrate that for data center workloads, ElasticTree can save up to 50 of network energy, while maintaining the ability to handle traffic surges. Our fast heuristic for computing network subsets enables ElasticTree to scale to data centers containing thousands of nodes. We finish by showing how a network admin might configure ElasticTree to satisfy their needs for performance and fault tolerance, while minimizing their network power bill."
]
}
|
1309.5826
|
2273168557
|
This paper investigates the opportunities and limitations of adaptive virtual machine (VM) migration to reduce communication costs in a virtualized environment. We introduce a new formal model for the problem of online VM migration in two scenarios: (1) VMs can be migrated arbitrarily in the substrate network; e.g., a private cloud provider may have an incentive to reduce the overall communication cost in the network. (2) VMs can only be migrated within a given tenant; e.g., a user that was assigned a set of physical machines may exchange the functionality of the VMs on these machines. We propose a simple class of Destination-Swap algorithms which are based on an aggressive collocation strategy (inspired by splay datastructures) and which maintain a minimal and local amount of per-node (amortized cost) information to decide where to migrate a VM and how; thus, the algorithms react quickly to changes in the load. The algorithms come in two main flavors, an indirect and distributed variant which keeps existing VM placements local, and a direct variant which keeps the number of affected VMs small. We show that naturally, inter-tenant optimizations yield a larger potential for optimization, but generally also a tenant itself can improve its embedding. Moreover, there exists an interesting tradeoff between direct and indirect strategies: indirect variants are preferable under skewed and sparse communication patterns due to their locality properties.
|
Seamless VM migration can be implemented in different ways. One may @cite_17 the VM memory to the destination host before releasing the VM at the source, or defer the memory transfer until the processor state has been sent to the destination (the so-called @cite_9 approach). @cite_25 investigate automated black-box, gray-box, and hybrid strategies for VM migration in datacenters. Researchers also conduct measurements and derive models for VM migration costs, e.g., under Web 2.0 workloads and quality-of-service sensitive applications. @cite_5
|
{
"cite_N": [
"@cite_5",
"@cite_9",
"@cite_25",
"@cite_17"
],
"mid": [
"2021204974",
"2143818983",
"206379916",
"1997269120"
],
"abstract": [
"Virtualization has become commonplace in modern data centers, often referred as \"computing clouds\". The capability of virtual machine live migration brings benefits such as improved performance, manageability and fault tolerance, while allowing workload movement with a short service downtime. However, service levels of applications are likely to be negatively affected during a live migration. For this reason, a better understanding of its effects on system performance is desirable. In this paper, we evaluate the effects of live migration of virtual machines on the performance of applications running inside Xen VMs. Results show that, in most cases, migration overhead is acceptable but cannot be disregarded, especially in systems where availability and responsiveness are governed by strict Service Level Agreements. Despite that, there is a high potential for live migration applicability in data centers serving modern Internet applications. Our results are based on a workload covering the domain of multi-tier Web 2.0 applications.",
"We present the design, implementation, and evaluation of post-copy based live migration for virtual machines (VMs) across a Gigabit LAN. Live migration is an indispensable feature in today's virtualization technologies. Post-copy migration defers the transfer of a VM's memory contents until after its processor state has been sent to the target host. This deferral is in contrast to the traditional pre-copy approach, which first copies the memory state over multiple iterations followed by a final transfer of the processor state. The post-copy strategy can provide a \"win-win\" by reducing total migration time closer to its equivalent time achieved by non-live VM migration. This is done while maintaining the liveness benefits of the pre-copy approach. We compare post-copy extensively against the traditional pre-copy approach on top of the Xen Hypervisor. Using a range of VM workloads we show improvements in several migration metrics including pages transferred, total migration time and network overhead. We facilitate the use of post-copy with adaptive pre-paging in order to eliminate all duplicate page transmissions. Our implementation is able to reduce the number of network-bound page faults to within 21 of the VM's working set for large workloads. Finally, we eliminate the transfer of free memory pages in both migration schemes through a dynamic self-ballooning (DSB) mechanism. DSB periodically releases free pages in a guest VM back to the hypervisor and significantly speeds up migration with negligible performance degradation.",
"Virtualization can provide significant benefits in data centers by enabling virtual machine migration to eliminate hotspots. We present Sandpiper, a system that automates the task of monitoring and detecting hotspots, determining a new mapping of physical to virtual resources and initiating the necessary migrations. Sandpiper implements a black-box approach that is fully OS- and application-agnostic and a gray-box approach that exploits OS- and application-level statistics. We implement our techniques in Xen and conduct a detailed evaluation using a mix of CPU, network and memory-intensive applications. Our results show that Sandpiper is able to resolve single server hotspots within 20 seconds and scales well to larger, data center environments. We also show that the gray-box approach can help Sandpiper make more informed decisions, particularly in response to memory pressure.",
"Migrating operating system instances across distinct physical hosts is a useful tool for administrators of data centers and clusters: It allows a clean separation between hard-ware and software, and facilitates fault management, load balancing, and low-level system maintenance. By carrying out the majority of migration while OSes continue to run, we achieve impressive performance with minimal service downtimes; we demonstrate the migration of entire OS instances on a commodity cluster, recording service downtimes as low as 60ms. We show that that our performance is sufficient to make live migration a practical tool even for servers running interactive loads. In this paper we consider the design options for migrating OSes running services with liveness constraints, focusing on data center and cluster environments. We introduce and analyze the concept of writable working set, and present the design, implementation and evaluation of high-performance OS migration built on top of the Xen VMM."
]
}
|
1309.5826
|
2273168557
|
This paper investigates the opportunities and limitations of adaptive virtual machine (VM) migration to reduce communication costs in a virtualized environment. We introduce a new formal model for the problem of online VM migration in two scenarios: (1) VMs can be migrated arbitrarily in the substrate network; e.g., a private cloud provider may have an incentive to reduce the overall communication cost in the network. (2) VMs can only be migrated within a given tenant; e.g., a user that was assigned a set of physical machines may exchange the functionality of the VMs on these machines. We propose a simple class of Destination-Swap algorithms which are based on an aggressive collocation strategy (inspired by splay datastructures) and which maintain a minimal and local amount of per-node (amortized cost) information to decide where to migrate a VM and how; thus, the algorithms react quickly to changes in the load. The algorithms come in two main flavors, an indirect and distributed variant which keeps existing VM placements local, and a direct variant which keeps the number of affected VMs small. We show that naturally, inter-tenant optimizations yield a larger potential for optimization, but generally also a tenant itself can improve its embedding. Moreover, there exists an interesting tradeoff between direct and indirect strategies: indirect variants are preferable under skewed and sparse communication patterns due to their locality properties.
|
VM migration has also been proposed for wide-area networks, where a lazy copyon reference can be used for moving VM disk state to reduce migration costs over weak links. @cite_1 In the wide-area, moving entire closer to the (mobile) users can reduce access latency @cite_28 , and there also exists work on the migration of entire @cite_30 which are latency-critical ( move-with-the-sun'') or latency-uncritical ( move-with-the-moon''). Also in the context of network virtualization, @cite_24 have shown that under certain circumstances, the migration of a Samba front-end server closer to the clients can be beneficial even for bulk-data applications.
|
{
"cite_N": [
"@cite_28",
"@cite_1",
"@cite_30",
"@cite_24"
],
"mid": [
"2056646448",
"2152276958",
"2050714144",
"2613890412"
],
"abstract": [
"Today's trend toward network virtualization and software-defined networking enables flexible new distributed systems where resources can be dynamically allocated and migrated to locations where they are most useful. This paper proposes a competitive analysis approach to design and reason about online algorithms that find a good tradeoff between the benefits and costs of a migratable service. A competitive online algorithm provides worst-case performance guarantees under any demand dynamics, and without any information or statistical assumptions on the demand in the future. This is attractive especially in scenarios where the demand is hard to predict and can be subject to unexpected events. As a case study, we describe a service (e.g., an SAP server or a gaming application) that uses network virtualization to improve the quality of service (QoS) experienced by thin client applications running on mobile devices. By decoupling the service from the underlying resource infrastructure, it can be migrated closer to the current client locations while taking into account migration costs. We identify the major cost factors in such a system and formalize the wide-area service migration problem. Our main contributions are a randomized and a deterministic online algorithm that achieve a competitive ratio of @math in a simplified scenario, where @math is the size of the substrate network. This is almost optimal. We complement our worst-case analysis with simulations in different specific scenarios and also sketch a migration demonstrator.",
"Virtual machine technology and the ease with which VMs can be migrated within the LAN, has changed the scope of resource management from allocating resources on a single server to manipulating pools of resources within a data center. We expect WAN migration of virtual machines to likewise transform the scope of provisioning compute resources from a single data center to multiple data centers spread across the country or around the world. In this paper we present the CloudNet architecure as a cloud framework consisting of cloud computing platforms linked with a VPN based network infrastructure to provide seamless and secure connectivity between enterprise and cloud data center sites. To realize our vision of efficiently pooling geographically distributed data center resources, CloudNet provides optimized support for live WAN migration of virtual machines. Specifically, we present a set of optimizations that minimize the cost of transferring storage and virtual machine memory during migrations over low bandwidth and high latency Internet links. We evaluate our system on an operational cloud platform distributed across the continental US. During simultaneous migrations of four VMs between data centers in Texas and Illinois, CloudNet's optimizations reduce memory migration time by 65 and lower bandwidth consumption for the storage and memory transfer by 19GB, a 50 reduction.",
"This paper attends to the problem of embedding flexibly specified virtual networks connecting cloud resources (e.g., storage or computation) on a given substrate (e.g., a data center, an ISP backbone, a router site, or a virtual provider network). We study a scenario where a substrate provider (or a potential intermediate broker or reseller) wants to optimize the embedding of these so-called by migrating them to more suitable locations. For instance, such re-embeddings can be useful if the were requested at short notice and initially placed heuristically. Subsequent optimizations can, e.g., reduce the peak resource loads in the network by spreading across the infrastructure or save energy by moving together and switching off unused components. We present the generic mathematical programming algorithm used in our prototype to compute optimal embeddings. For example, this algorithm supports different objective functions (such as load minimization or energy conservation), arbitrary resource combinations and the mapping of multiple virtual nodes of a to a single substrate node, cost-aware migrations, and it can deal with all link types that arise in practice (e.g., full-duplex or even wireless or wired broadcast links with multiple endpoints). Of course, such rigorous optimizations are time consuming, and we report on the time complexities obtained from our experiments with our network virtualization prototype architecture. It turns out that optimizing over moderate sized infrastructures is feasible, even for scenarios with high flexibility and without tuning the solver software to speed up computations further.",
"It is envisaged that services and applications will migrate to a cloud-computing paradigm where thin-clients on user devices access, over the network, applications hosted in data centers by application service providers. Examples are cloud based gaming applications and cloud-supported virtual desktops. For good performance and efficiency, it is critical that these services are delivered from locations that are the best for the current (dynamically changing) set of users. To achieve this, we expect that services will be hosted on virtual machines in interconnected data centers and that these virtual machines will migrate dynamically to locations best suited for the current user population. A basic network infrastructure need then is the ability to migrate virtual machines across multiple networks without losing service continuity. In this paper, we develop mechanisms to accomplish this using a network-virtualization architecture that relies on a set of distributed forwarding elements with centralized control (borrowing on several recent proposals in a similar vein). We describe a preliminary prototype system, built using Openflow components, that demonstrates the feasibility of this architecture in enabling seamless migration of virtual machines and in enhancing delivery of cloud-based services."
]
}
|
1309.5826
|
2273168557
|
This paper investigates the opportunities and limitations of adaptive virtual machine (VM) migration to reduce communication costs in a virtualized environment. We introduce a new formal model for the problem of online VM migration in two scenarios: (1) VMs can be migrated arbitrarily in the substrate network; e.g., a private cloud provider may have an incentive to reduce the overall communication cost in the network. (2) VMs can only be migrated within a given tenant; e.g., a user that was assigned a set of physical machines may exchange the functionality of the VMs on these machines. We propose a simple class of Destination-Swap algorithms which are based on an aggressive collocation strategy (inspired by splay datastructures) and which maintain a minimal and local amount of per-node (amortized cost) information to decide where to migrate a VM and how; thus, the algorithms react quickly to changes in the load. The algorithms come in two main flavors, an indirect and distributed variant which keeps existing VM placements local, and a direct variant which keeps the number of affected VMs small. We show that naturally, inter-tenant optimizations yield a larger potential for optimization, but generally also a tenant itself can improve its embedding. Moreover, there exists an interesting tradeoff between direct and indirect strategies: indirect variants are preferable under skewed and sparse communication patterns due to their locality properties.
|
In the theory community, many migration problem variants have been studied in the context of online page migration, and more generally, online metrical task systems @cite_11 . There exist several interesting results by on how to embed and migrate services to reduce load, e.g., @cite_0 @cite_16 . Indeed, given a fixed communication pattern between pairs of VMs, our work is related to classic graph embedding problems such as the (e.g., @cite_7 ) of a graph. Recently, such embedding problems have also been studied from the perspective of self-adjusting networks, in the context of distributed splay datastructures and peer-to-peer networks @cite_19 . Indeed, our algorithms are inspired by the classic splaying techniques introduced in the seminal work by Sleator and Tarjan @cite_14 on self-adjusting search trees, in the sense that we also aggressively migrate VMs closer together.
|
{
"cite_N": [
"@cite_14",
"@cite_7",
"@cite_0",
"@cite_19",
"@cite_16",
"@cite_11"
],
"mid": [
"2130055503",
"2036322374",
"2121752873",
"2131175133",
"2063025533",
"1552828154"
],
"abstract": [
"The splay tree, a self-adjusting form of binary search tree, is developed and analyzed. The binary search tree is a data structure for representing tables and lists so that accessing, inserting, and deleting items is easy. On an n -node splay tree, all the standard search tree operations have an amortized time bound of O (log n ) per operation, where by “amortized time” is meant the time per operation averaged over a worst-case sequence of operations. Thus splay trees are as efficient as balanced trees when total running time is the measure of interest. In addition, for sufficiently long access sequences, splay trees are as efficient, to within a constant factor, as static optimum search trees. The efficiency of splay trees comes not from an explicit structural constraint, as with balanced trees, but from applying a simple restructuring heuristic, called splaying , whenever the tree is accessed. Extensions of splaying give simplified forms of two other data structures: lexicographic or multidimensional search trees and link cut trees.",
"Arora, Rao and Vazirani [2] showed that the standard semi-definite programming (SDP) relaxation of the Sparsest Cut problem with the triangle inequality constraints has an integrality gap of O(√log n). They conjectured that the gap is bounded from above by a constant. In this paper, we disprove this conjecture (referred to as the ARV-Conjecture) by constructing an Ω(log log n) integrality gap instance. Khot and Vishnoi [16] had earlier disproved the non-uniform version of the ARV-Conjecture.A simple \"stretching\" of the integrality gap instance for the Sparsest Cut problem serves as an Ω(log log n) integrality gap instance for the SDP relaxation of the Minimum Linear Arrangement problem. This SDP relaxation was considered in [6, 11], where it was shown that its integrality gap is bounded from above by O(√log n log log n).",
"We study a basic resource allocation problem that arises in cloud computing environments. The physical network of the cloud is represented as a graph with vertices denoting servers and edges corresponding to communication links. A workload is a set of processes with processing requirements and mutual communication requirements. The workloads arrive and depart over time, and the resource allocator must map each workload upon arrival to the physical network. We consider the objective of minimizing the congestion. We show that solving a subproblem about mapping a single workload to the physical graph essentially suffices to solve the general problem. In particular, an α-approximation for this single mapping problem gives an O(α log nD)-competitive algorithm for the general problem, where n is the number of nodes in the physical network and D is the maximum to minimum workload duration ratio. We also show how to solve the single mapping problem for two natural class of workloads, namely depth-d-trees and complete-graph workloads. For depth-d tree, we give an nO(d) time O(d2 log (nd))-approximation based on a strong LP relaxation inspired by the Sherali-Adams hierarchy.",
"This paper initiates the study of self-adjusting networks (or distributed data structures) whose topologies dynamically adapt to a communication pattern σ. We present a fully decentralized self-adjusting solution called SplayNet. A SplayNet is a distributed generalization of the classic splay tree concept. It ensures short paths (which can be found using local-greedy routing) between communication partners while minimizing topological rearrangements. We derive an upper bound for the amortized communication cost of a SplayNet based on empirical entropies of σ, and show that SplayNets have several interesting convergence properties. For instance, SplayNets features a provable online optimality under special requests scenarios. We also investigate the optimal static network and prove different lower bounds for the average communication cost based on graph cuts and on the empirical entropy of the communication pattern σ. From these lower bounds it follows, e.g., that SplayNets are optimal in scenarios where the requests follow a product distribution as well. Finally, this paper shows that in contrast to the Minimum Linear Arrangement problem which is generally NP-hard, the optimal static tree network can be computed in polynomial time for any guest graph, despite the exponentially large graph family. We complement our formal analysis with a small simulation study on a Facebook graph.",
"The recent growing popularity of cloud-based solutions and the variety of new applications present new challenges for cloud management and resource utilization. In this paper we concentrate on the networking aspect and consider the placement problem of virtual machines (VMs) of applications with intense bandwidth requirements. Optimizing the available network bandwidth is far more complex than optimizing resources like memory or CPU, since every network link may be used by many physical hosts and thus by the VMs residing in these hosts. We focus on maximizing the benefit from the overall communication sent by the VMs to a single designated point in the data center (called the root). This is the typical case when considering a storage area network of applications with intense storage requirements. We formulate a bandwidth-constrained VM placement optimization problem that models this setting. This problem is NP hard, and we present a polynomial-time constant approximation algorithm for its most general version, in which hosts are connected to the root by a general network graph. For more practical cases, in which the network topology is a tree and the revenue is a simple function of the allocated bandwidth, we present improved approximation algorithms that are more efficient in terms of running time. We evaluate the expected performance of our proposed algorithms through a simulation study over traces from a real production data center, providing strong indications to the superiority of our proposed solutions.",
"Preface 1. Introduction to competitive analysis: the list accessing problem 2. Introduction to randomized algorithms: the list accessing problem 3. Paging: deterministic algorithms 4. Paging: randomized algorithms 5. Alternative models for paging: beyond pure competitive analysis 6. Game theoretic foundations 7. Request - answer games 8. Competitive analysis and zero-sum games 9. Metrical task systems 10. The k-server problem 11. Randomized k-server algorithms 12. Load-balancing 13. Call admission and circuit-routing 14. Search, trading and portfolio selection 15. Competitive analysis and decision making under uncertainty Appendices Bibliography Index."
]
}
|
1309.5551
|
185349370
|
Design space exploration is commonly performed in embedded system, where the architecture is a complicated piece of engineering. With the current trend of many-core systems, design space exploration in general-purpose computers can no longer be avoided. Microgrid is a complicated architecture, and therefor we need to perform design space exploration. Generally, simulators are used for the design space exploration of an architecture. Different simulators with different levels of complexity, simulation time and accuracy are used. Simulators with little complexity, low simulation time and reasonable accuracy are desirable for the design space exploration of an architecture. These simulators are referred as high-level simulators and are commonly used in the design of embedded systems. However, the use of high-level simulation for design space exploration in general-purpose computers is a relatively new area of research.
|
FPGA prototypes: They have low little simulation time, high accuracy and are useful in DSE. However these simulations require more development time and are more complex. They also suffer from combinatoric explosion of considering many low level parameters during design space exploration. Some examples are: @cite_38 @cite_16 @cite_40 @cite_24 .
|
{
"cite_N": [
"@cite_24",
"@cite_38",
"@cite_40",
"@cite_16"
],
"mid": [
"2066339098",
"2108204620",
"",
"2130080799"
],
"abstract": [
"The RAMP project's goal is to enable the intensive, multidisciplinary innovation that the computing industry will need to tackle the problems of parallel processing. RAMP itself is an open-source, community-developed, FPGA-based emulator of parallel architectures. its design framework lets a large, collaborative community develop and contribute reusable, composable design modules. three complete designs - for transactional memory, distributed systems, and distributed-shared memory - demonstrate the platform's potential.",
"In this paper we explore microprocessor performance models implemented on FPGAs. While FPGAs can help with simulation speed, the increased implementation complexity can degrade model development time. We assess whether a simulator split into closely-coupled timing and functional partitions can address this by easing the development of timing models while retaining fine-grained parallelism. We give the semantics of our simulator partitioning, and discuss the architecture of its implementation on an FPGA. We describe how three timing models of vastly different target processors can use the same functional partition, and assess their performance.",
"",
"Simulation is an important means of evaluating new microarchitectures. Current trends toward chip multiprocessors (CMPs) try the ability of designers to develop efficient simulators. CMP simulation speed can be improved by exploiting parallelism in the CMP simulation model. This may be done by either running the simulation on multiple processors or by integrating multiple processors into the simulation to replace simulated processors. Doing so usually requires tedious manual parallelization or re-design to encapsulate processors. This paper presents techniques to perform automated simulator parallelization and hardware integration for CMP structural models. We show that automated parallelization can achieve an 7.60 speedup for a 16-processor CMP model on a conventional 4-processor shared-memory multiprocessor. We demonstrate the power of hardware integration by integrating eight hardware PowerPC cores into a CMP model, achieving a speedup of up to 5.82."
]
}
|
1309.5391
|
2949286957
|
Colour is a key component in the successful dissemination of information. Since many real-world concepts are associated with colour, for example danger with red, linguistic information is often complemented with the use of appropriate colours in information visualization and product marketing. Yet, there is no comprehensive resource that captures concept-colour associations. We present a method to create a large word-colour association lexicon by crowdsourcing. A word-choice question was used to obtain sense-level annotations and to ensure data quality. We focus especially on abstract concepts and emotions to show that even they tend to have strong colour associations. Thus, using the right colours can not only improve semantic coherence, but also inspire the desired emotional response.
|
The relation between language and cognition has received considerable attention over the years, mainly on answering whether language impacts thought, and if so, to what extent. Experiments with colour categories have been used both to show that language has an effect on thought @cite_8 @cite_3 and that it does not @cite_0 . However, that line of work does not explicitly deal with word--colour associations. In fact, we did not find any other academic work that gathered large word--colour associations. There is, however, a commercial endeavor---Cymbolism http: www.cymbolism.com about .
|
{
"cite_N": [
"@cite_0",
"@cite_3",
"@cite_8"
],
"mid": [
"2025065221",
"",
"1985177116"
],
"abstract": [
"Abstract Among the semantics of different common attributes of the environment, consistent and correct color naming seems to develop paradoxically late in children, for even young infants discriminate and categorize colors well, color is a salient feature of the child's world, and children are aware of color as a separate domain, know color terms, and respond to color questions with color names. Several specific anomalies characterize early linguistic development in this domain: Errors in beginning color naming are perseverative or random, there seems to be a minimum age for correct and consistent color naming, developmental rate shows wide individual differences, and acquisition among girls is generally faster than among boys. This essay reviews data that support these observations, and evaluates three traditional explanations for them—including the perceptual salience of color for children, experience and learning in the child, and cognitive development—against a fourth new possibility. It is hypothesized that appropriate color naming may depend on the maturation and integration of specific cortical neurological structures; among several interpretations, the neurological one accounts best for diverse characteristics of early color naming",
"",
"“The Whorf thesis on the relationship between language and thought is found to involve the following two propositions: (a) Different linguistic communities perceive and conceive reality in different ways. (b) The language spoken in a community helps to shape the cognitive structure of the individual"
]
}
|
1309.5391
|
2949286957
|
Colour is a key component in the successful dissemination of information. Since many real-world concepts are associated with colour, for example danger with red, linguistic information is often complemented with the use of appropriate colours in information visualization and product marketing. Yet, there is no comprehensive resource that captures concept-colour associations. We present a method to create a large word-colour association lexicon by crowdsourcing. A word-choice question was used to obtain sense-level annotations and to ensure data quality. We focus especially on abstract concepts and emotions to show that even they tend to have strong colour associations. Thus, using the right colours can not only improve semantic coherence, but also inspire the desired emotional response.
|
The MRC Psycholinguistic Database @cite_1 has, among other information, the imageability ratings for 9240 words. http: www.psy.uwa.edu.au mrcdatabase uwa .htm The imageability rating is a score given by human judges that reflects how easy it is to visualize the concept. It is a scale from 100 (very hard to visualize) to 700 (very easy to visualize). We use the ratings in our experiments to determine whether there is a correlation between imageability and strength of colour association.
|
{
"cite_N": [
"@cite_1"
],
"mid": [
"1985449126"
],
"abstract": [
"This paper describes a computerised database of psycholinguistic information. Semantic, syntactic, phonological and orthographic information about some or all of the 98,538 words in the database is accessible, by using a specially-written and very simple programming language. Word-association data are also included in the database. Some examples are given of the use of the database for selection of stimuli to be used in psycholinguistic experimentation or linguistic research."
]
}
|
1309.5124
|
2009855072
|
Modern social networks frequently encompass multiple distinct types of connectivity information; for instance, explicitly acknowledged friend relationships might complement behavioral measures that link users according to their actions or interests. One way to represent these networks is as multi-layer graphs, where each layer contains a unique set of edges over the same underlying vertices (users). Edges in different layers typically have related but distinct semantics; depending on the application multiple layers might be used to reduce noise through averaging, to perform multifaceted analyses, or a combination of the two. However, it is not obvious how to extend standard graph analysis techniques to the multi-layer setting in a flexible way. In this paper we develop latent variable models and methods for mining multi-layer networks for connectivity patterns based on noisy data.
|
The literature on single layer networks is large, with contributions coming from many different fields. There are many results on structural and spectral properties of a single-layer network, including community detection @cite_17 , random walk return times @cite_21 , and percolation theory results @cite_27 . Diffusion or infection models have also been studied in the context of complex networks (see @cite_26 , for instance).
|
{
"cite_N": [
"@cite_27",
"@cite_21",
"@cite_26",
"@cite_17"
],
"mid": [
"2124637492",
"1964062850",
"2054476043",
"2089458547"
],
"abstract": [
"The emergence of order in natural systems is a constant source of inspiration for both physical and biological sciences. While the spatial order characterizing for example the crystals has been the basis of many advances in contemporary physics, most complex systems in nature do not offer such high degree of order. Many of these systems form complex networks whose nodes are the elements of the system and edges represent the interactions between them. Traditionally complex networks have been described by the random graph theory founded in 1959 by Paul Erdohs and Alfred Renyi. One of the defining features of random graphs is that they are statistically homogeneous, and their degree distribution (characterizing the spread in the number of edges starting from a node) is a Poisson distribution. In contrast, recent empirical studies, including the work of our group, indicate that the topology of real networks is much richer than that of random graphs. In particular, the degree distribution of real networks is a power-law, indicating a heterogeneous topology in which the majority of the nodes have a small degree, but there is a significant fraction of highly connected nodes that play an important role in the connectivity of the network. The scale-free topology of real networks has very important consequences on their functioning. For example, we have discovered that scale-free networks are extremely resilient to the random disruption of their nodes. On the other hand, the selective removal of the nodes with highest degree induces a rapid breakdown of the network to isolated subparts that cannot communicate with each other. The non-trivial scaling of the degree distribution of real networks is also an indication of their assembly and evolution. Indeed, our modeling studies have shown us that there are general principles governing the evolution of networks. Most networks start from a small seed and grow by the addition of new nodes which attach to the nodes already in the system. This process obeys preferential attachment: the new nodes are more likely to connect to nodes with already high degree. We have proposed a simple model based on these two principles wich was able to reproduce the power-law degree distribution of real networks. Perhaps even more importantly, this model paved the way to a new paradigm of network modeling, trying to capture the evolution of networks, not just their static topology.",
"We investigate random walks on complex networks and derive an exact expression for the mean firstpassage time (MFPT) between two nodes. We introduce for each node the random walk centrality C, which is the ratio between its coordination number and a characteristic relaxation time, and show that it determines essentially the MFPT. The centrality of a node determines the relative speed by which a node can receive and spread information over the network in a random process. Numerical simulations of an ensemble of random walkers moving on paradigmatic network models confirm this analytical prediction.",
"Online social networks play a major role in the spread of information at very large scale. A lot of effort have been made in order to understand this phenomenon, ranging from popular topic detection to information diffusion modeling, including influential spreaders identification. In this article, we present a survey of representative methods dealing with these issues and propose a taxonomy that summarizes the state-of-the-art. The objective is to provide a comprehensive analysis and guide of existing efforts around information diffusion in social networks. This survey is intended to help researchers in quickly understanding existing works and possible improvements to bring.",
"Many networks display community structure---groups of vertices within which connections are dense but between which they are sparser---and sensitive computer algorithms have in recent years been developed for detecting this structure. These algorithms, however, are computationally demanding, which limits their application to small networks. Here we describe an algorithm which gives excellent results when tested on both computer-generated and real-world networks and is much faster, typically thousands of times faster, than previous algorithms. We give several example applications, including one to a collaboration network of more than 50 000 physicists."
]
}
|
1309.5124
|
2009855072
|
Modern social networks frequently encompass multiple distinct types of connectivity information; for instance, explicitly acknowledged friend relationships might complement behavioral measures that link users according to their actions or interests. One way to represent these networks is as multi-layer graphs, where each layer contains a unique set of edges over the same underlying vertices (users). Edges in different layers typically have related but distinct semantics; depending on the application multiple layers might be used to reduce noise through averaging, to perform multifaceted analyses, or a combination of the two. However, it is not obvious how to extend standard graph analysis techniques to the multi-layer setting in a flexible way. In this paper we develop latent variable models and methods for mining multi-layer networks for connectivity patterns based on noisy data.
|
Recently, there has been a growing interest in the multi-level network problem. Some basic network properties have been extended to the multilevel structure @cite_5 , @cite_24 as well as some results that serve as an extension of single layer concepts, such as multi-level network growth @cite_2 and spreading of epidemics @cite_15 . The metrics that have been proposed attempt to incorporate the dependence of the layers into the statistical framework, which allows for a much richer view of the network. In the same vein, the approach described in this paper performs parameter inference on a multi-level network, incorporating some of the dependence information that the multi-level structure allows.
|
{
"cite_N": [
"@cite_24",
"@cite_5",
"@cite_15",
"@cite_2"
],
"mid": [
"2197583351",
"2402817395",
"2052838081",
"2031487654"
],
"abstract": [
"There is growing interest in multiplex networks where individual nodes take part in several layers of networks simultaneously. This is the case for example in social networks where each individual node has different kind of social ties or transportation systems where each location is connected to another location by different types of transport. Many of these multiplex are characterized by a significant overlap of the links in different layers. In this paper we introduce a statistical mechanics framework to describe multiplex ensembles. A multiplex is a system formed by N nodes and M layers of interactions where each node belongs to the M layers at the same time. Each layer @math is formed by a network @math . Here we introduce the concept of correlated multiplex ensembles in which the existence of a link in one layer is correlated with the existence of a link in another layer. This implies that a typical multiplex of the ensemble can have a significant overlap of the links in the different layers. Moreover we characterize microcanonical and canonical multiplex ensembles satisfying respectively hard and soft constraints and we discuss how to construct multiplex in these ensembles. Finally we provide the expression for the entropy of these ensembles that can be useful to address different inference problems involving multiplexes.",
"",
"Many real networks are not isolated from each other but form networks of networks, often interrelated in non trivial ways. Here, we analyze an epidemic spreading process taking place on top of two interconnected complex networks. We develop a heterogeneous mean field approach that allows us to calculate the conditions for the emergence of an endemic state. Interestingly, a global endemic state may arise in the coupled system even though the epidemics is not able to propagate on each network separately, and even when the number of coupling connections is small. Our analytic results are successfully confronted against large-scale numerical simulations.",
"We propose a modeling framework for growing multiplexes where a node can belong to different networks. We define new measures for multiplexes and we identify a number of relevant ingredients for modeling their evolution such as the coupling between the different layers and the arrival time distribution of nodes. The topology of the multiplex changes significantly in the different cases under consideration, with effects of the arrival time of nodes on the degree distribution, average shortest paths and interdependence."
]
}
|
1309.5124
|
2009855072
|
Modern social networks frequently encompass multiple distinct types of connectivity information; for instance, explicitly acknowledged friend relationships might complement behavioral measures that link users according to their actions or interests. One way to represent these networks is as multi-layer graphs, where each layer contains a unique set of edges over the same underlying vertices (users). Edges in different layers typically have related but distinct semantics; depending on the application multiple layers might be used to reduce noise through averaging, to perform multifaceted analyses, or a combination of the two. However, it is not obvious how to extend standard graph analysis techniques to the multi-layer setting in a flexible way. In this paper we develop latent variable models and methods for mining multi-layer networks for connectivity patterns based on noisy data.
|
Bayesian model averaging is also related to this work; ideas from BMA are used to create conditional independence between the layers of a network @cite_28 . This framework accounts for the interdependent relationships between the multiple layers into latent variables, which can then be estimated.
|
{
"cite_N": [
"@cite_28"
],
"mid": [
"2089763487"
],
"abstract": [
"It is argued that P-values and the tests based upon them give unsatisfactory results, especially in large samples. It is shown that, in regression, when there are many candidate independent variables, standard variable selection procedures can give very misleading results. Also, by selecting a single model, they ignore model uncertainty and so underestimate the uncertainty about quantities of interest. The Bayesian approach to hypothesis testing, model selection, and accounting for model uncertainty is presented. Implementing this is straightforward through the use of the simple and accurate BIC approximation, and it can be done using the output from standard software. Specific results are presented for most of the types of model commonly used in sociology. It is shown that this approach overcomes the difficulties with P-values and standard model selection procedures based on them. It also allows easy comparison of nonnested models, and permits the quantification of the evidence for a null hypothesis of interest, such as a convergence theory or a hypothesis about societ al norms."
]
}
|
1309.5344
|
2951751593
|
Two-factor authentication (2F) aims to enhance resilience of password-based authentication by requiring users to provide an additional authentication factor, e.g., a code generated by a security token. However, it also introduces non-negligible costs for service providers and requires users to carry out additional actions during the authentication process. In this paper, we present an exploratory comparative study of the usability of 2F technologies. First, we conduct a pre-study interview to identify popular technologies as well as contexts and motivations in which they are used. We then present the results of a quantitative study based on a survey completed by 219 Mechanical Turk users, aiming to measure the usability of three popular 2F solutions: codes generated by security tokens, one-time PINs received via email or SMS, and dedicated smartphone apps (e.g., Google Authenticator). We record contexts and motivations, and study their impact on perceived usability. We find that 2F technologies are overall perceived as usable, regardless of motivation and or context of use. We also present an exploratory factor analysis, highlighting that three metrics -- ease-of-use, required cognitive efforts, and trustworthiness -- are enough to capture key factors affecting 2F usability.
|
Adams and Sasse @cite_15 showed that, for users, security is not a primary task, thus users feel under attack by capricious'' password polices. Password policies often mandate the use of long (and hard-to-remember) passwords, frequent password changes, and using different passwords across different services. This ultimately drives the user to find the simplest password that barely complies with requirements @cite_15 . Inglesant and Sasse @cite_22 analyzed password diaries'', i.e., they asked users to record the times they authenticated via passwords, and found that frequent password changes are a burden, users do not change passwords unless forced to, and that it is difficult for them to create memorable, secure passwords adhering to the policy. They also concluded that context of use has a significant impact on the ability of users to become familiar with complex passwords and, essentially, on their usability.
|
{
"cite_N": [
"@cite_15",
"@cite_22"
],
"mid": [
"2037202491",
"2150341374"
],
"abstract": [
"Many system security departments treat users as a security risk to be controlled. The general consensus is that most users are careless and unmotivated when it comes to system security. In a recent study, we found that users may indeed compromise computer security mechanisms, such as password authentication, both knowing and unknowingly. A closer analysis, however, revealed that such behavior is often caused by the way in which security mechanisms are implemented, and users’ lack of knowledge. We argue that to change this state of affairs, security departments need to communicate more with users, and adopt a usercentered design approach.",
"HCI research published 10 years ago pointed out that many users cannot cope with the number and complexity of passwords, and resort to insecure workarounds as a consequence. We present a study which re-examined password policies and password practice in the workplace today. 32 staff members in two organisations kept a password diary for 1 week, which produced a sample of 196 passwords. The diary was followed by an interview which covered details of each password, in its context of use. We find that users are in general concerned to maintain security, but that existing security policies are too inflexible to match their capabilities, and the tasks and contexts in which they operate. As a result, these password policies can place demands on users which impact negatively on their productivity and, ultimately, that of the organisation. We conclude that, rather than focussing password policies on maximizing password strength and enforcing frequency alone, policies should be designed using HCI principles to help the user to set an appropriately strong password in a specific context of use."
]
}
|
1309.5344
|
2951751593
|
Two-factor authentication (2F) aims to enhance resilience of password-based authentication by requiring users to provide an additional authentication factor, e.g., a code generated by a security token. However, it also introduces non-negligible costs for service providers and requires users to carry out additional actions during the authentication process. In this paper, we present an exploratory comparative study of the usability of 2F technologies. First, we conduct a pre-study interview to identify popular technologies as well as contexts and motivations in which they are used. We then present the results of a quantitative study based on a survey completed by 219 Mechanical Turk users, aiming to measure the usability of three popular 2F solutions: codes generated by security tokens, one-time PINs received via email or SMS, and dedicated smartphone apps (e.g., Google Authenticator). We record contexts and motivations, and study their impact on perceived usability. We find that 2F technologies are overall perceived as usable, regardless of motivation and or context of use. We also present an exploratory factor analysis, highlighting that three metrics -- ease-of-use, required cognitive efforts, and trustworthiness -- are enough to capture key factors affecting 2F usability.
|
@cite_17 discussed burdens on nursing staff created by hard-to-remember passwords in conjunction with frequent logouts required by healthcare security standards, such as the Health Insurance Portability and Accountability Act (HIPAA).
|
{
"cite_N": [
"@cite_17"
],
"mid": [
"2147201525"
],
"abstract": [
"Logging in by typing usernames and passwords is by far the most common way to access modern computer systems. However, such contemporary user authentication mechanisms are inappropriate in a ubiquitous computing environment, where users constantly are accessing a wide range of different devices. This paper introduces new concepts for user authentication in ubiquitous computing, such as the notion of proximity-based user authentication and silent login. The design of these new mechanisms is part of the design of a ubiquitous computing infrastructure for hospitals, which is grounded in field studies of medical work in hospitals. The paper reports from field studies of clinicians using an electronic patient record (EPR) and describes severe usability problems associated with its login procedures. The EPR’s login mechanisms do not recognize the nature of medical work as being nomadic, interrupted, and cooperative around sharing common material. The consequence is that login is circumvented and security is jeopardized."
]
}
|
1309.5344
|
2951751593
|
Two-factor authentication (2F) aims to enhance resilience of password-based authentication by requiring users to provide an additional authentication factor, e.g., a code generated by a security token. However, it also introduces non-negligible costs for service providers and requires users to carry out additional actions during the authentication process. In this paper, we present an exploratory comparative study of the usability of 2F technologies. First, we conduct a pre-study interview to identify popular technologies as well as contexts and motivations in which they are used. We then present the results of a quantitative study based on a survey completed by 219 Mechanical Turk users, aiming to measure the usability of three popular 2F solutions: codes generated by security tokens, one-time PINs received via email or SMS, and dedicated smartphone apps (e.g., Google Authenticator). We record contexts and motivations, and study their impact on perceived usability. We find that 2F technologies are overall perceived as usable, regardless of motivation and or context of use. We also present an exploratory factor analysis, highlighting that three metrics -- ease-of-use, required cognitive efforts, and trustworthiness -- are enough to capture key factors affecting 2F usability.
|
The impact on usability and security of password composition policies has also been studied. For instance, @cite_3 showed that complex password policies can actually average password entropy, and that a 16-character with no additional requirements provided the highest average entropy per password. @cite_1 found that for important'' accounts, a password meter (i.e., a visual clue on password's strength) successfully helps increase entropy.
|
{
"cite_N": [
"@cite_1",
"@cite_3"
],
"mid": [
"2134080857",
"2113266120"
],
"abstract": [
"Password meters tell users whether their passwords are \"weak\" or \"strong.\" We performed a laboratory experiment to examine whether these meters influenced users' password selections when they were forced to change their real passwords, and when they were not told that their passwords were the subject of a study. We observed that the presence of meters yielded significantly stronger passwords. We performed a followup field experiment to test a different scenario: creating a password for an unimportant account. In this scenario, we found that the meters made no observable difference: participants simply reused weak passwords that they used to protect similar low-risk accounts. We conclude that meters result in stronger passwords when users are forced to change existing passwords on \"important\" accounts and that individual meter design decisions likely have a marginal impact.",
"Text-based passwords are the most common mechanism for authenticating humans to computer systems. To prevent users from picking passwords that are too easy for an adversary to guess, system administrators adopt password-composition policies (e.g., requiring passwords to contain symbols and numbers). Unfortunately, little is known about the relationship between password-composition policies and the strength of the resulting passwords, or about the behavior of users (e.g., writing down passwords) in response to different policies. We present a large-scale study that investigates password strength, user behavior, and user sentiment across four password-composition policies. We characterize the predictability of passwords by calculating their entropy, and find that a number of commonly held beliefs about password composition and strength are inaccurate. We correlate our results with user behavior and sentiment to produce several recommendations for password-composition policies that result in strong passwords without unduly burdening users."
]
}
|
1309.5344
|
2951751593
|
Two-factor authentication (2F) aims to enhance resilience of password-based authentication by requiring users to provide an additional authentication factor, e.g., a code generated by a security token. However, it also introduces non-negligible costs for service providers and requires users to carry out additional actions during the authentication process. In this paper, we present an exploratory comparative study of the usability of 2F technologies. First, we conduct a pre-study interview to identify popular technologies as well as contexts and motivations in which they are used. We then present the results of a quantitative study based on a survey completed by 219 Mechanical Turk users, aiming to measure the usability of three popular 2F solutions: codes generated by security tokens, one-time PINs received via email or SMS, and dedicated smartphone apps (e.g., Google Authenticator). We record contexts and motivations, and study their impact on perceived usability. We find that 2F technologies are overall perceived as usable, regardless of motivation and or context of use. We also present an exploratory factor analysis, highlighting that three metrics -- ease-of-use, required cognitive efforts, and trustworthiness -- are enough to capture key factors affecting 2F usability.
|
Another line of work has focused on password managers . @cite_16 compared the usability of two password managers (PwdHash and Password Multiplier), pointing to a few usability issues in both implementations and showing that users were often uncomfortable relinquishing control'' to password managers. @cite_6 studied the usability of three password managers (LastPass, KeePassMobile, and Roboform2Go), with a focus on mobile phone users. They concluded that users preferred portable, stand-alone managers over cloud-based ones, despite the better usability of the latter, as they were not comfortable giving control of their passwords to an online entity.
|
{
"cite_N": [
"@cite_16",
"@cite_6"
],
"mid": [
"2129768690",
"1540780277"
],
"abstract": [
"We present a usability study of two recent password manager proposals: PwdHash (, 2005) and Password Multiplier (, 2005). Both papers considered usability issues in greater than typical detail, the former briefly reporting on a small usability study; both also provided implementations for download. Our study involving 26 users found that both proposals suffer from major usability problems. Some of these are not \"simply\" usability issues, but rather lead directly to security exposures. Not surprisingly, we found the most significant problems arose from users having inaccurate or incomplete mental models of the software. Our study revealed many interesting misunderstandings D for example, users reporting a task as easy even when unsuccessful at completing that task; and believing their passwords were being strengthened when in fact they had failed to engage the appropriate protection mechanism. Our findings also suggested that ordinary users would be reluctant to optin to using these managers: users were uncomfortable with \"relinquishing control\" of their passwords to a manager, did not feel that they needed the password managers, or that the managers provided greater security.",
"Proposed in response to the growing number of passwords users have to memorize, password managers allow to store one's credentials, either on a third-party server (online password manager), or on a portable device (portable password manager) such as a mobile phone or a USB key. In this paper, we present a comparative usability study of three popular password managers: an online manager (LastPass), a phone manager (KeePassMobile) and a USB manager (Roboform2Go). Our study provides valuable insights on average users' perception of security and usability of the three password management approaches. We find, contrary to our intuition, that users overall prefer the two portable managers over the online manager, despite the better usability of the latter. Also, surprisingly, our non-technical pool of users shows a strong inclination towards the phone manager. These findings can generally be credited to the fact that the users were not comfortable giving control of their passwords to an online entity and preferred to manage their passwords themselves on their own portable devices. Our results prompt the need for research on developing user-friendly and secure phone managers, owing to the ubiquity of mobile phones."
]
}
|
1309.5344
|
2951751593
|
Two-factor authentication (2F) aims to enhance resilience of password-based authentication by requiring users to provide an additional authentication factor, e.g., a code generated by a security token. However, it also introduces non-negligible costs for service providers and requires users to carry out additional actions during the authentication process. In this paper, we present an exploratory comparative study of the usability of 2F technologies. First, we conduct a pre-study interview to identify popular technologies as well as contexts and motivations in which they are used. We then present the results of a quantitative study based on a survey completed by 219 Mechanical Turk users, aiming to measure the usability of three popular 2F solutions: codes generated by security tokens, one-time PINs received via email or SMS, and dedicated smartphone apps (e.g., Google Authenticator). We record contexts and motivations, and study their impact on perceived usability. We find that 2F technologies are overall perceived as usable, regardless of motivation and or context of use. We also present an exploratory factor analysis, highlighting that three metrics -- ease-of-use, required cognitive efforts, and trustworthiness -- are enough to capture key factors affecting 2F usability.
|
Finally, @cite_0 evaluated, without conducting any user study, authentication schemes including: plain passwords, OpenID @cite_9 , security tokens, phone-based tokens, etc. They used a set of 25 subjective factors: 8 measuring usability, 6 measuring deployability, and 11 measuring security. Although they did not conduct any user study, authors concluded that: (i) no existing authentication scheme does best in all metrics, and (ii) technologies that one could classify as 2F do better than passwords in security but worse in usability.
|
{
"cite_N": [
"@cite_0",
"@cite_9"
],
"mid": [
"2030112111",
"2054702684"
],
"abstract": [
"We evaluate two decades of proposals to replace text passwords for general-purpose user authentication on the web using a broad set of twenty-five usability, deployability and security benefits that an ideal scheme might provide. The scope of proposals we survey is also extensive, including password management software, federated login protocols, graphical password schemes, cognitive authentication schemes, one-time passwords, hardware tokens, phone-aided schemes and biometrics. Our comprehensive approach leads to key insights about the difficulty of replacing passwords. Not only does no known scheme come close to providing all desired benefits: none even retains the full set of benefits that legacy passwords already provide. In particular, there is a wide range from schemes offering minor security benefits beyond legacy passwords, to those offering significant security benefits in return for being more costly to deploy or more difficult to use. We conclude that many academic proposals have failed to gain traction because researchers rarely consider a sufficiently wide range of real-world constraints. Beyond our analysis of current schemes, our framework provides an evaluation methodology and benchmark for future web authentication proposals.",
"With the advancement in user-centric and URI-based identity systems over the past two years, it has become clear that a single specification will not be the solution to all problems. Rather, like the other layers of the Internet, developing small, interoperable specifications that are independently implementable and useful will ultimately lead to market adoption of these technologies. This is the intent of the OpenID framework. OpenID Authentication 1.0 began as a lightweight HTTP-based URL authentication protocol. OpenID Authentication 2.0 it is now turning into an open community-driven platform that allows and encourages innovation. It supports both URLs and XRIs as user identifiers, uses Yadis XRDS documents for identity service discovery, adds stronger security, and supports both public and private identifiers. With continuing convergence under this broad umbrella, the OpenID framework is emerging as a viable solution for Internet-scale user-centric identity infrastructure."
]
}
|
1309.5344
|
2951751593
|
Two-factor authentication (2F) aims to enhance resilience of password-based authentication by requiring users to provide an additional authentication factor, e.g., a code generated by a security token. However, it also introduces non-negligible costs for service providers and requires users to carry out additional actions during the authentication process. In this paper, we present an exploratory comparative study of the usability of 2F technologies. First, we conduct a pre-study interview to identify popular technologies as well as contexts and motivations in which they are used. We then present the results of a quantitative study based on a survey completed by 219 Mechanical Turk users, aiming to measure the usability of three popular 2F solutions: codes generated by security tokens, one-time PINs received via email or SMS, and dedicated smartphone apps (e.g., Google Authenticator). We record contexts and motivations, and study their impact on perceived usability. We find that 2F technologies are overall perceived as usable, regardless of motivation and or context of use. We also present an exploratory factor analysis, highlighting that three metrics -- ease-of-use, required cognitive efforts, and trustworthiness -- are enough to capture key factors affecting 2F usability.
|
Although not directly related to our 2F study, we will use in our factor analysis some metrics introduced in the context of password replacements @cite_0 and password managers @cite_6 .
|
{
"cite_N": [
"@cite_0",
"@cite_6"
],
"mid": [
"2030112111",
"1540780277"
],
"abstract": [
"We evaluate two decades of proposals to replace text passwords for general-purpose user authentication on the web using a broad set of twenty-five usability, deployability and security benefits that an ideal scheme might provide. The scope of proposals we survey is also extensive, including password management software, federated login protocols, graphical password schemes, cognitive authentication schemes, one-time passwords, hardware tokens, phone-aided schemes and biometrics. Our comprehensive approach leads to key insights about the difficulty of replacing passwords. Not only does no known scheme come close to providing all desired benefits: none even retains the full set of benefits that legacy passwords already provide. In particular, there is a wide range from schemes offering minor security benefits beyond legacy passwords, to those offering significant security benefits in return for being more costly to deploy or more difficult to use. We conclude that many academic proposals have failed to gain traction because researchers rarely consider a sufficiently wide range of real-world constraints. Beyond our analysis of current schemes, our framework provides an evaluation methodology and benchmark for future web authentication proposals.",
"Proposed in response to the growing number of passwords users have to memorize, password managers allow to store one's credentials, either on a third-party server (online password manager), or on a portable device (portable password manager) such as a mobile phone or a USB key. In this paper, we present a comparative usability study of three popular password managers: an online manager (LastPass), a phone manager (KeePassMobile) and a USB manager (Roboform2Go). Our study provides valuable insights on average users' perception of security and usability of the three password management approaches. We find, contrary to our intuition, that users overall prefer the two portable managers over the online manager, despite the better usability of the latter. Also, surprisingly, our non-technical pool of users shows a strong inclination towards the phone manager. These findings can generally be credited to the fact that the users were not comfortable giving control of their passwords to an online entity and preferred to manage their passwords themselves on their own portable devices. Our results prompt the need for research on developing user-friendly and secure phone managers, owing to the ubiquity of mobile phones."
]
}
|
1309.5344
|
2951751593
|
Two-factor authentication (2F) aims to enhance resilience of password-based authentication by requiring users to provide an additional authentication factor, e.g., a code generated by a security token. However, it also introduces non-negligible costs for service providers and requires users to carry out additional actions during the authentication process. In this paper, we present an exploratory comparative study of the usability of 2F technologies. First, we conduct a pre-study interview to identify popular technologies as well as contexts and motivations in which they are used. We then present the results of a quantitative study based on a survey completed by 219 Mechanical Turk users, aiming to measure the usability of three popular 2F solutions: codes generated by security tokens, one-time PINs received via email or SMS, and dedicated smartphone apps (e.g., Google Authenticator). We record contexts and motivations, and study their impact on perceived usability. We find that 2F technologies are overall perceived as usable, regardless of motivation and or context of use. We also present an exploratory factor analysis, highlighting that three metrics -- ease-of-use, required cognitive efforts, and trustworthiness -- are enough to capture key factors affecting 2F usability.
|
Previous work has suggested that security via 2F decreases usability of end-user authentication. For instance, @cite_4 showed that 2F increases redundancy'', thus augmenting security but decreasing usability. Along similar lines, @cite_13 analyzed the effects of implementing 2F on productivity, focusing on the Common Access Card'' (CaC), a combined smart card photo ID card used (at that time) by US Department of Defense (DoD) employees. They reported that users stopped checking emails at home (due to the unavailability of card readers) and that many employees accidentally left their card in the reader. Authors also estimated that the DoD spent about @math 62 @math 22 @math 30 @math 141 @math 30$ proposed questions. The authors concluded that familiarity with a technology (rather than perceived usability) impacted user willingness to use a given authentication technology. Their results showed that users perceived the 1-factor method (with which the average user had most experience) as being the most secure and most convenient option.
|
{
"cite_N": [
"@cite_13",
"@cite_4"
],
"mid": [
"2121371744",
"2072694646"
],
"abstract": [
"The Department of Defense has mandated the use of a two-factor security system for access and authentication. The increased security of such a system has been extensively researched by the military. This research uses a survey to examine the effects on productivity and usability of implementing such a system.",
"The usability of security systems has become a major issue in research on the efficiency and user acceptance of security systems. The authentication process is essential for controlling the access to various resources and facilities. The design of usable yet secure user authentication methods raises crucial questions concerning how to solve conflicts between security and usability goals."
]
}
|
1309.4628
|
2952035598
|
Learning word representations has recently seen much success in computational linguistics. However, assuming sequences of word tokens as input to linguistic analysis is often unjustified. For many languages word segmentation is a non-trivial task and naturally occurring text is sometimes a mixture of natural language strings and other character data. We propose to learn text representations directly from raw character sequences by training a Simple recurrent Network to predict the next character in text. The network uses its hidden layer to evolve abstract representations of the character sequences it sees. To demonstrate the usefulness of the learned text embeddings, we use them as features in a supervised character level text segmentation and labeling task: recognizing spans of text containing programming language code. By using the embeddings as features we are able to substantially improve over a baseline which uses only surface character n-grams.
|
We also do now know of any work on learning to detect and label code segments in raw text. However, @cite_0 describe a system called infoZilla which uses hand-written rules to extract source code fragments, stack traces, patches and enumerations from bug reports. In contrast, here we leverage the Stackoverflow dataset to learn how to perform a similar task automatically.
|
{
"cite_N": [
"@cite_0"
],
"mid": [
"2121972257"
],
"abstract": [
"In software engineering experiments, the description of bug reports is typically treated as natural language text, although it often contains stack traces, source code, and patches. Neglecting such structural elements is a loss of valuable information; structure usually leads to a better performance of machine learning approaches. In this paper, we present a tool called infoZilla that detects structural elements from bug reports with near perfect accuracy and allows us to extract them. We anticipate that infoZilla can be used to leverage data from bug reports at a different granularity level that can facilitate interesting research in the future."
]
}
|
1309.4405
|
1620816389
|
We study approximation algorithms for several variants of the MaxCover problem, with the focus on algorithms that run in FPT time. In the MaxCover problem we are given a set N of elements, a family S of subsets of N, and an integer K. The goal is to find up to K sets from S that jointly cover (i.e., include) as many elements as possible. This problem is well-known to be NP-hard and, under standard complexitytheoretic assumptions, the best possible polynomial-time approximation algorithm has approximation ratio (1 − 1 ). We first consider a variant of MaxCover with bounded element frequencies, i.e., a variant where there is a constant p such that each element belongs to at most p sets in S. For this case we show that there is an FPT approximation scheme (i.e., for each β there is a β-approximation algorithm running in FPT time) for the problem of maximizing the number of covered elements, and a randomized FPT approximation scheme for the problem of minimizing the number of elements left uncovered (we take K to be the parameter). Then, for the case where there is a constant p such that each element belongs to at least p sets from S, we show that the standard greedy approximation algorithm achieves approximation ratio exactly 1 − e −max(pK kSk,1) . We conclude by considering an unrestricted variant of MaxCover, and show approximation algorithms that run in exponential time and combine an exact algorithm with a greedy approximation. Some of our results improve currently known results for MaxVertexCover.
|
As far as we know, the best polynomial-time approximation algorithm for MaxVertexCover is due to Ageev and Sviridenko @cite_11 , and achieves approximation ratio of @math . However, in various settings, it is possible to achieve better results; we mention the papers of @cite_18 and of Galluccio and Nobili @cite_2 as examples.
|
{
"cite_N": [
"@cite_18",
"@cite_2",
"@cite_11"
],
"mid": [
"2004973717",
"2073925702",
"1512148653"
],
"abstract": [
"Abstract We consider the max-vertex-cover (MVC) problem, i.e., find k vertices from an undirected and edge-weighted graph G =( V , E ), where | V |= n ⩾ k , such that the total edge weight covered by the k vertices is maximized. There is a 3 4-approximation algorithm for MVC, based on a linear programming relaxation. We show that the guaranteed ratio can be improved by a simple greedy algorithm for k >(3 4) n , and a simple randomized algorithm for k >(1 2) n . Furthermore, we study a semidefinite programming (SDP) relaxation based approximation algorithms for MVC. We show that, for a range of k , our SDP-based algorithm achieves the best performance guarantee among the four types of algorithms mentioned in this paper.",
"We provide a new LP relaxation of the maximum vertex cover problem and a polynomial-time algorithm that finds a solution within the approximation factor 1-1 (2q@?), where q@? is the size of the smallest clique in a given clique-partition of the edge weighting of G.",
"In this paper we demonstrate a general method of designing constant-factor approximation algorithms for some discrete optimization problems with cardinality constraints. The core of the method is a simple deterministic (\"pipage\") procedure of rounding of linear relaxations. By using the method we design a (1-(1-1 k)k)-approximation algorithm for the maximum coverage problem where k is the maximum size of the subsets that are covered, and a 1 2-approximation algorithm for the maximum cut problem with given sizes of parts in the vertex set bipartition. The performance guarantee of the former improves on that of the well-known (1 - e-1)-greedy algorithm due to Cornuejols, Fisher and Nemhauser in each case of bounded k. The latter is, to the best of our knowledge, the first constant-factor algorithm for that version of the maximum cut problem."
]
}
|
1309.4405
|
1620816389
|
We study approximation algorithms for several variants of the MaxCover problem, with the focus on algorithms that run in FPT time. In the MaxCover problem we are given a set N of elements, a family S of subsets of N, and an integer K. The goal is to find up to K sets from S that jointly cover (i.e., include) as many elements as possible. This problem is well-known to be NP-hard and, under standard complexitytheoretic assumptions, the best possible polynomial-time approximation algorithm has approximation ratio (1 − 1 ). We first consider a variant of MaxCover with bounded element frequencies, i.e., a variant where there is a constant p such that each element belongs to at most p sets in S. For this case we show that there is an FPT approximation scheme (i.e., for each β there is a β-approximation algorithm running in FPT time) for the problem of maximizing the number of covered elements, and a randomized FPT approximation scheme for the problem of minimizing the number of elements left uncovered (we take K to be the parameter). Then, for the case where there is a constant p such that each element belongs to at least p sets from S, we show that the standard greedy approximation algorithm achieves approximation ratio exactly 1 − e −max(pK kSk,1) . We conclude by considering an unrestricted variant of MaxCover, and show approximation algorithms that run in exponential time and combine an exact algorithm with a greedy approximation. Some of our results improve currently known results for MaxVertexCover.
|
From the point of view of parametrized complexity, MaxVertexCover was first considered by @cite_10 , who have shown that it is @math -complete. The problem was also studied by Cai @cite_13 who gave the currently best exact algorithm for it and by Marx, who gave an FPT approximation scheme @cite_20 . There is also an FPT algorithm for MaxCover , for parameter @math , i.e., the number of elements to cover, due to Bl "a ser @cite_12 .
|
{
"cite_N": [
"@cite_13",
"@cite_10",
"@cite_12",
"@cite_20"
],
"mid": [
"1994262057",
"2156896570",
"2073689249",
"1980809300"
],
"abstract": [
"We study the parameterized complexity of cardinality constrained optimization problems, i.e. optimization problems that require their solutions to contain specified numbers of elements to optimize solution values. For this purpose, we consider around 20 such optimization problems, as well as their parametric duals, that deal with various fundamental relations among vertices and edges in graphs. We have almost completely settled their parameterized complexity by giving either FPT algorithms or W[1]-hardness proofs. Furthermore, we obtain faster exact algorithms for several cardinality constrained optimization problems by transforming them into problems of finding maximum (minimum) weight triangles in weighted graphs.",
"Important variants of theVERTEX COVER problem (among others, CONNECTED VERTEX COVER, CAPACITATED VERTEX COVER, and MAXIMUM PARTIAL VERTEX COVER) have been intensively studied in terms of polynomial-time approximability. By way of contrast, their parameterized complexity has so far been completely open. We close this gap here by showing that, with the size of the desired vertex cover as the parameter, CONNECTED VERTEX COVER and CAPACITATED VERTEX COVER are both fixed-parameter tractable while MAXIMUM PARTIAL VERTEX COVER is W[1]-complete. This answers two open questions from the literature. The results extend to several closely related problems. Interestingly, although the considered variants of VERTEX COVER behave very similar in terms of constant factor approximability, they display a wide range of different characteristics when investigating their parameterized complexities.",
"We study the generalization of covering problems such as the set cover problem to partial covering problems. Here we only want to cover a given number k of elements rather than all elements. For instance, in the k-partial (weighted) set cover problem, we wish to compute a minimum weight collection of sets that covers at least k elements. As a main result, we show that the k-partial set cover problem and its special cases like the k-partial vertex cover problem are all fixed parameter tractable (with parameter k). As a second example, we consider the minimum weight k-partial t-restricted cycle cover problem.",
"Approximation algorithms and parameterized complexity are usually considered to be two separate ways of dealing with hard algorithmic problems. In this paper, our aim is to investigate how these two fields can be combined to achieve better algorithms than what any of the two theories could offer. We discuss the different ways parameterized complexity can be extended to approximation algorithms, survey results of this type and propose directions for future research."
]
}
|
1309.4405
|
1620816389
|
We study approximation algorithms for several variants of the MaxCover problem, with the focus on algorithms that run in FPT time. In the MaxCover problem we are given a set N of elements, a family S of subsets of N, and an integer K. The goal is to find up to K sets from S that jointly cover (i.e., include) as many elements as possible. This problem is well-known to be NP-hard and, under standard complexitytheoretic assumptions, the best possible polynomial-time approximation algorithm has approximation ratio (1 − 1 ). We first consider a variant of MaxCover with bounded element frequencies, i.e., a variant where there is a constant p such that each element belongs to at most p sets in S. For this case we show that there is an FPT approximation scheme (i.e., for each β there is a β-approximation algorithm running in FPT time) for the problem of maximizing the number of covered elements, and a randomized FPT approximation scheme for the problem of minimizing the number of elements left uncovered (we take K to be the parameter). Then, for the case where there is a constant p such that each element belongs to at least p sets from S, we show that the standard greedy approximation algorithm achieves approximation ratio exactly 1 − e −max(pK kSk,1) . We conclude by considering an unrestricted variant of MaxCover, and show approximation algorithms that run in exponential time and combine an exact algorithm with a greedy approximation. Some of our results improve currently known results for MaxVertexCover.
|
In our paper, we attempt to merge parametrized study of MaxCover with its study from the point of view of approximation algorithms. In that respect, our work is very close in spirit that of Croce and Paschos @cite_3 , who provide moderately exponential approximation algorithms for MaxVertexCover, and to the work of Marx @cite_20 . Compared to their results, we consider a more general problem, MaxCover (with or without bounded frequencies) and, as far as it is possible, we seek algorithms that run in FPT time (the algorithm of Croce and Paschos is not FPT). Interestingly, even though we focus on a more general problem, our algorithms improve upon the results of Croce and Paschos @cite_3 and of Marx @cite_20 , even when applied to MaxVertexCover .
|
{
"cite_N": [
"@cite_20",
"@cite_3"
],
"mid": [
"1980809300",
"1597165385"
],
"abstract": [
"Approximation algorithms and parameterized complexity are usually considered to be two separate ways of dealing with hard algorithmic problems. In this paper, our aim is to investigate how these two fields can be combined to achieve better algorithms than what any of the two theories could offer. We discuss the different ways parameterized complexity can be extended to approximation algorithms, survey results of this type and propose directions for future research.",
"We first devise moderately exponential exact algorithms for maxk-vertex cover, with time-complexity exponential in n but with polynomial space-complexity by developing a branch and reduce method based upon the measure-and-conquer technique. We then prove that, there exists an exact algorithm for maxk-vertex cover with complexity bounded above by the maximum among ck and γτ, for some γ<2, where τ is the cardinality of a minimum vertex cover of G (note that @math k @math with respect to parameter k unless @math ), using polynomial space. We finally study approximation of maxk-vertex cover by moderately exponential algorithms. The general goal of the issue of moderately exponential approximation is to catch-up on polynomial inapproximability, by providing algorithms achieving, with worst-case running times importantly smaller than those needed for exact computation, approximation ratios unachievable in polynomial time."
]
}
|
1309.4496
|
1746795157
|
Reliable statistical information is important to make political decisions on a sound basis and to help measure the impact of policies. Unfortunately, statistics offices in developing countries have scarce resources and statistical censuses are therefore conducted sporadically. Based on mobile phone communications and history of airtime credit purchases, we estimate the relative income of individuals, the diversity and inequality of income, and an indicator for socioeconomic segregation for fine-grained regions of an African country. Our study shows how to use mobile phone datasets as a starting point to understand the socio-economic state of a country, which can be especially useful in countries with few resources to conduct large surveys.
|
@cite_17 and Frias- @cite_7 have focused on mobility variables. They looked at many of them, and reported the following top features : the number of cell tower used per week, the maximum distance traveled in a week and the weekly radius of gyration (i.e. the typical range of a user trajectory @cite_1 ). Both were able to predict socio-economic classes of neighborhoods in their city of study.
|
{
"cite_N": [
"@cite_1",
"@cite_7",
"@cite_17"
],
"mid": [
"1982300822",
"",
"2087184820"
],
"abstract": [
"This study used a sample of 100,000 mobile phone users whose trajectory was tracked for six months to study human mobility patterns. Displacements across all users suggest behaviour close to the Levy-flight-like pattern observed previously based on the motion of marked dollar bills, but with a cutoff in the distribution. The origin of the Levy patterns observed in the aggregate data appears to be population heterogeneity and not Levy patterns at the level of the individual.",
"",
"The socioeconomic status of a population or an individual provides an understanding of its access to housing, education, health or basic services like water and electricity. In itself, it is also an indirect indicator of the purchasing power and as such a key element when personalizing the interaction with a customer, especially for marketing campaigns or offers of new products. In this paper we study if the information derived from the aggregated use of cell phone records can be used to identify the socioeconomic levels of a population. We present predictive models constructed with SVMs and Random Forests that use the aggregated behavioral variables of the communication antennas to predict socioeconomic levels. Our results show correct prediction rates of over 80 for an urban population of around 500,000 citizens."
]
}
|
1309.3877
|
1502303435
|
Most metric learning algorithms, as well as Fisher's Discriminant Analysis (FDA), optimize some cost function of different measures of within-and between-class distances. On the other hand, Support Vector Machines(SVMs) and several Multiple Kernel Learning (MKL) algorithms are based on the SVM large margin theory. Recently, SVMs have been analyzed from SVM and metric learning, and to develop new algorithms that build on the strengths of each. Inspired by the metric learning interpretation of SVM, we develop here a new metric-learning based SVM framework in which we incorporate metric learning concepts within SVM. We extend the optimization problem of SVM to include some measure of the within-class distance and along the way we develop a new within-class distance measure which is appropriate for SVM. In addition, we adopt the same approach for MKL and show that it can be also formulated as a Mahalanobis metric learning problem. Our end result is a number of SVM MKL algorithms that incorporate metric learning concepts. We experiment with them on a set of benchmark datasets and observe important predictive performance improvements.
|
@cite_8 recently show can be formulated as a Mahalanobis metric learning problem in which the transformation matrix is diagonal @math , @math . In the metric learning jargon learns a diagonal linear transformation @math and a translation @math which maximize the margin and place the two classes symmetrically in the two different sides of the hyperplane @math . In the standard view of , the space is fixed and the hyperplane is moved around to achieve the optimal margin. In the metric view of , the hyperplane is fixed to @math and the space is scaled, @math , and then translated, @math , so that the instances are placed optimally around @math @cite_8 .
|
{
"cite_N": [
"@cite_8"
],
"mid": [
"1826077941"
],
"abstract": [
"Support Vector Machines, SVMs, and the Large Margin Nearest Neighbor algorithm, LMNN, are two very popular learning algorithms with quite different learning biases. In this paper we bring them into a unified view and show that they have a much stronger relation than what is commonly thought. We analyze SVMs from a metric learning perspective and cast them as a metric learning problem, a view which helps us uncover the relations of the two algorithms. We show that LMNN can be seen as learning a set of local SVM-like models in a quadratic space. Along the way and inspired by the metric-based interpretation of SVM s we derive a novel variant of SVMs, epsilon-SVM, to which LMNN is even more similar. We give a unified view of LMNN and the different SVM variants. Finally we provide some preliminary experiments on a number of benchmark datasets in which show that epsilon-SVM compares favorably both with respect to LMNN and SVM."
]
}
|
1309.3877
|
1502303435
|
Most metric learning algorithms, as well as Fisher's Discriminant Analysis (FDA), optimize some cost function of different measures of within-and between-class distances. On the other hand, Support Vector Machines(SVMs) and several Multiple Kernel Learning (MKL) algorithms are based on the SVM large margin theory. Recently, SVMs have been analyzed from SVM and metric learning, and to develop new algorithms that build on the strengths of each. Inspired by the metric learning interpretation of SVM, we develop here a new metric-learning based SVM framework in which we incorporate metric learning concepts within SVM. We extend the optimization problem of SVM to include some measure of the within-class distance and along the way we develop a new within-class distance measure which is appropriate for SVM. In addition, we adopt the same approach for MKL and show that it can be also formulated as a Mahalanobis metric learning problem. Our end result is a number of SVM MKL algorithms that incorporate metric learning concepts. We experiment with them on a set of benchmark datasets and observe important predictive performance improvements.
|
@cite_8 proposed a measure of the within-class distance for . This measure is inspired by the relation, developed in that paper, between and ---Large Margin Nearest Neighbor @cite_19 ---a popular metric learning algorithm. It is defined as the sum of the distances of the instances from the margin hyperplane and for the class @math , it is given by: @math . The authors then proposed an variant, called , which optimizes the margin and the above within-class distance measure, essentially combining both the and the learning biases. As we will see below turns out to be a special case of the which we will describe in section . The optimization problem of is: which is equivalent to: where @math is the distance of the @math th instance from its margin hyperplane, and @math are the slack variables which allow for the soft margin. This problem is convex and can be kernelized directly as a standard .
|
{
"cite_N": [
"@cite_19",
"@cite_8"
],
"mid": [
"2106053110",
"1826077941"
],
"abstract": [
"The accuracy of k-nearest neighbor (kNN) classification depends significantly on the metric used to compute distances between different examples. In this paper, we show how to learn a Mahalanobis distance metric for kNN classification from labeled examples. The Mahalanobis metric can equivalently be viewed as a global linear transformation of the input space that precedes kNN classification using Euclidean distances. In our approach, the metric is trained with the goal that the k-nearest neighbors always belong to the same class while examples from different classes are separated by a large margin. As in support vector machines (SVMs), the margin criterion leads to a convex optimization based on the hinge loss. Unlike learning in SVMs, however, our approach requires no modification or extension for problems in multiway (as opposed to binary) classification. In our framework, the Mahalanobis distance metric is obtained as the solution to a semidefinite program. On several data sets of varying size and difficulty, we find that metrics trained in this way lead to significant improvements in kNN classification. Sometimes these results can be further improved by clustering the training examples and learning an individual metric within each cluster. We show how to learn and combine these local metrics in a globally integrated manner.",
"Support Vector Machines, SVMs, and the Large Margin Nearest Neighbor algorithm, LMNN, are two very popular learning algorithms with quite different learning biases. In this paper we bring them into a unified view and show that they have a much stronger relation than what is commonly thought. We analyze SVMs from a metric learning perspective and cast them as a metric learning problem, a view which helps us uncover the relations of the two algorithms. We show that LMNN can be seen as learning a set of local SVM-like models in a quadratic space. Along the way and inspired by the metric-based interpretation of SVM s we derive a novel variant of SVMs, epsilon-SVM, to which LMNN is even more similar. We give a unified view of LMNN and the different SVM variants. Finally we provide some preliminary experiments on a number of benchmark datasets in which show that epsilon-SVM compares favorably both with respect to LMNN and SVM."
]
}
|
1309.3877
|
1502303435
|
Most metric learning algorithms, as well as Fisher's Discriminant Analysis (FDA), optimize some cost function of different measures of within-and between-class distances. On the other hand, Support Vector Machines(SVMs) and several Multiple Kernel Learning (MKL) algorithms are based on the SVM large margin theory. Recently, SVMs have been analyzed from SVM and metric learning, and to develop new algorithms that build on the strengths of each. Inspired by the metric learning interpretation of SVM, we develop here a new metric-learning based SVM framework in which we incorporate metric learning concepts within SVM. We extend the optimization problem of SVM to include some measure of the within-class distance and along the way we develop a new within-class distance measure which is appropriate for SVM. In addition, we adopt the same approach for MKL and show that it can be also formulated as a Mahalanobis metric learning problem. Our end result is a number of SVM MKL algorithms that incorporate metric learning concepts. We experiment with them on a set of benchmark datasets and observe important predictive performance improvements.
|
@cite_1 proposed to maximize the margin and to constrain the outputs of , their optimization problem thus optimizes the margin and some measure of the data spread. This approach also falls within our general metric-learning-based framework that we will present right away.
|
{
"cite_N": [
"@cite_1"
],
"mid": [
"2118713255"
],
"abstract": [
"Leading classification methods such as support vector machines (SVMs) and their counterparts achieve strong generalization performance by maximizing the margin of separation between data classes. While the maximum margin approach has achieved promising performance, this article identifies its sensitivity to affine transformations of the data and to directions with large data spread. Maximum margin solutions may be misled by the spread of data and preferentially separate classes along large spread directions. This article corrects these weaknesses by measuring margin not in the absolute sense but rather only relative to the spread of data in any projection direction. Maximum relative margin corresponds to a data-dependent regularization on the classification function while maximum absolute margin corresponds to an l2 norm constraint on the classification function. Interestingly, the proposed improvements only require simple extensions to existing maximum margin formulations and preserve the computational efficiency of SVMs. Through the maximization of relative margin, surprising performance gains are achieved on real-world problems such as digit, text classification and on several other benchmark data sets. In addition, risk bounds are derived for the new formulation based on Rademacher averages."
]
}
|
1309.3809
|
1779321007
|
We consider the problem of naming objects in complex, natural scenes containing widely varying object appearance and subtly different names. Informed by cognitive research, we propose an approach based on sharing context based object hypotheses between visual and lexical spaces. To this end, we present the Visual Semantic Integration Model (VSIM) that represents object labels as entities shared between semantic and visual contexts and infers a new image by updating labels through context switching. At the core of VSIM is a semantic Pachinko Allocation Model and a visual nearest neighbor Latent Dirichlet Allocation Model. For inference, we derive an iterative Data Augmentation algorithm that pools the label probabilities and maximizes the joint label posterior of an image. Our model surpasses the performance of state-of-art methods in several visual tasks on the challenging SUN09 dataset.
|
is gaining importance in large scale learning of web images. One strand of research is aimed at generating natural language sentences from objects and their inter-relations @cite_0 . Our problem is related to joint image and word sense discrimination encountered in image retrieval tasks. These works have analyzed polysemy in images returned from keyword searches, in terms of visual senses of keywords. However the ambiguity in these tasks lies mostly in the visual domain since keywords are usually static, sparse and well-defined. Hence, the between keywords and images is either abstracted through a single latent sense @cite_13 , picked up from knowledge sources e.g., wikipedia or the image and text words are jointly modeled through a single latent variable @cite_6 . As shown in the results, these simple correlations are not effective in mapping the rich interactions between semantic and visual space.
|
{
"cite_N": [
"@cite_0",
"@cite_13",
"@cite_6"
],
"mid": [
"2066134726",
"1599287875",
"2020842694"
],
"abstract": [
"We posit that visually descriptive language offers computer vision researchers both information about the world, and information about how people describe the world. The potential benefit from this source is made more significant due to the enormous amount of language data easily available today. We present a system to automatically generate natural language descriptions from images that exploits both statistics gleaned from parsing large quantities of text data and recognition algorithms from computer vision. The system is very effective at producing relevant sentences for images. It also generates descriptions that are notably more true to the specific image content than previous work.",
"We study the task of learning to rank images given a text query, a problem that is complicated by the issue of multiple senses. That is, the senses of interest are typically the visually distinct concepts that a user wishes to retrieve. In this paper, we propose to learn a ranking function that optimizes the ranking cost of interest and simultaneously discovers the disambiguated senses of the query that are optimal for the supervised task. Note that no supervised information is given about the senses. Experiments performed on web images and the ImageNet dataset show that using our approach leads to a clear gain in performance.",
"We consider the problem of modeling annotated data---data with multiple types where the instance of one type (such as a caption) serves as a description of the other type (such as an image). We describe three hierarchical probabilistic mixture models which aim to describe such data, culminating in correspondence latent Dirichlet allocation, a latent variable model that is effective at modeling the joint distribution of both types and the conditional distribution of the annotation given the primary type. We conduct experiments on the Corel database of images and captions, assessing performance in terms of held-out likelihood, automatic annotation, and text-based image retrieval."
]
}
|
1309.3809
|
1779321007
|
We consider the problem of naming objects in complex, natural scenes containing widely varying object appearance and subtly different names. Informed by cognitive research, we propose an approach based on sharing context based object hypotheses between visual and lexical spaces. To this end, we present the Visual Semantic Integration Model (VSIM) that represents object labels as entities shared between semantic and visual contexts and infers a new image by updating labels through context switching. At the core of VSIM is a semantic Pachinko Allocation Model and a visual nearest neighbor Latent Dirichlet Allocation Model. For inference, we derive an iterative Data Augmentation algorithm that pools the label probabilities and maximizes the joint label posterior of an image. Our model surpasses the performance of state-of-art methods in several visual tasks on the challenging SUN09 dataset.
|
provide a nice framework for scene understanding due to the modular separation of concepts at different granularities. Mostly, previous work has used semantic networks as filters to remove incompatible object detections in the scene @cite_7 @cite_11 . A visual hierarchy of object classes is proposed in @cite_1 . Our work is related to the topic modeling algorithms for scene understanding @cite_2 @cite_8 @cite_6 @cite_12 . However, these models try to capture overlapping information between images and text to reinforce each other. In contrast, our method captures the complementary information in these contexts and exploits them to improve the quality of the inferred labels. To the best of our knowledge, no previous work has considered such joint inference framework across dichotomous information spaces.
|
{
"cite_N": [
"@cite_7",
"@cite_8",
"@cite_1",
"@cite_6",
"@cite_2",
"@cite_12",
"@cite_11"
],
"mid": [
"1982522767",
"2129004009",
"2025862220",
"2020842694",
"2033012377",
"2106624428",
""
],
"abstract": [
"There has been a growing interest in exploiting contextual information in addition to local features to detect and localize multiple object categories in an image. Context models can efficiently rule out some unlikely combinations or locations of objects and guide detectors to produce a semantically coherent interpretation of a scene. However, the performance benefit from using context models has been limited because most of these methods were tested on datasets with only a few object categories, in which most images contain only one or two object categories. In this paper, we introduce a new dataset with images that contain many instances of different object categories and propose an efficient model that captures the contextual information among more than a hundred of object categories. We show that our context model can be applied to scene understanding tasks that local detectors alone cannot solve.",
"We present a novel generative model for simultaneously recognizing and segmenting object and scene classes. Our model is inspired by the traditional bag of words representation of texts and images as well as a number of related generative models, including probabilistic latent semantic analysis (pLSA) and latent Dirichlet allocation (LDA). A major drawback of the pLSA and LDA models is the assumption that each patch in the image is independently generated given its corresponding latent topic. While such representation provides an efficient computational method, it lacks the power to describe the visually coherent images and scenes. Instead, we propose a spatially coherent latent topic model (spatial-LTM). Spatial-LTM represents an image containing objects in a hierarchical way by over-segmented image regions of homogeneous appearances and the salient image patches within the regions. Only one single latent topic is assigned to the image patches within each region, enforcing the spatial coherency of the model. This idea gives rise to the following merits of spatial-LTM: (1) spatial-LTM provides a unified representation for spatially coherent bag of words topic models; (2) spatial-LTM can simultaneously segment and classify objects, even in the case of occlusion and multiple instances; and (3) spatial-LTM can be trained either unsupervised or supervised, as well as when partial object labels are provided. We verify the success of our model in a number of segmentation and classification experiments.",
"In the real visual world, the number of categories a classifier needs to discriminate is on the order of hundreds or thousands. For example, the SUN dataset [24] contains 899 scene categories and ImageNet [6] has 15,589 synsets. Designing a multiclass classifier that is both accurate and fast at test time is an extremely important problem in both machine learning and computer vision communities. To achieve a good trade-off between accuracy and speed, we adopt the relaxed hierarchy structure from [15], where a set of binary classifiers are organized in a tree or DAG (directed acyclic graph) structure. At each node, classes are colored into positive and negative groups which are separated by a binary classifier while a subset of confusing classes is ignored. We color the classes and learn the induced binary classifier simultaneously using a unified and principled max-margin optimization. We provide an analysis on generalization error to justify our design. Our method has been tested on both Caltech-256 (object recognition) [9] and the SUN dataset (scene classification) [24], and shows significant improvement over existing methods.",
"We consider the problem of modeling annotated data---data with multiple types where the instance of one type (such as a caption) serves as a description of the other type (such as an image). We describe three hierarchical probabilistic mixture models which aim to describe such data, culminating in correspondence latent Dirichlet allocation, a latent variable model that is effective at modeling the joint distribution of both types and the conditional distribution of the annotation given the primary type. We conduct experiments on the Corel database of images and captions, assessing performance in terms of held-out likelihood, automatic annotation, and text-based image retrieval.",
"We develop hierarchical, probabilistic models for objects, the parts composing them, and the visual scenes surrounding them. Our approach couples topic models originally developed for text analysis with spatial transformations, and thus consistently accounts for geometric constraints. By building integrated scene models, we may discover contextual relationships, and better exploit partially labeled training images. We first consider images of isolated objects, and show that sharing parts among object categories improves detection accuracy when learning from few examples. Turning to multiple object scenes, we propose nonparametric models which use Dirichlet processes to automatically learn the number of parts underlying each object category, and objects composing each scene. The resulting transformed Dirichlet process (TDP) leads to Monte Carlo algorithms which simultaneously segment and recognize objects in street and office scenes.",
"Given an image, we propose a hierarchical generative model that classifies the overall scene, recognizes and segments each object component, as well as annotates the image with a list of tags. To our knowledge, this is the first model that performs all three tasks in one coherent framework. For instance, a scene of a polo game' consists of several visual objects such as human', horse', grass', etc. In addition, it can be further annotated with a list of more abstract (e.g. dusk') or visually less salient (e.g. saddle') tags. Our generative model jointly explains images through a visual model and a textual model. Visually relevant objects are represented by regions and patches, while visually irrelevant textual annotations are influenced directly by the overall scene class. We propose a fully automatic learning framework that is able to learn robust scene models from noisy Web data such as images and user tags from Flickr.com. We demonstrate the effectiveness of our framework by automatically classifying, annotating and segmenting images from eight classes depicting sport scenes. In all three tasks, our model significantly outperforms state-of-the-art algorithms.",
""
]
}
|
1309.4009
|
2949275180
|
Although user access patterns on the live web are well-understood, there has been no corresponding study of how users, both humans and robots, access web archives. Based on samples from the Internet Archive's public Wayback Machine, we propose a set of basic usage patterns: Dip (a single access), Slide (the same page at different archive times), Dive (different pages at approximately the same archive time), and Skim (lists of what pages are archived, i.e., TimeMaps). Robots are limited almost exclusively to Dips and Skims, but human accesses are more varied between all four types. Robots outnumber humans 10:1 in terms of sessions, 5:4 in terms of raw HTTP accesses, and 4:1 in terms of megabytes transferred. Robots almost always access TimeMaps (95 of accesses), but humans predominately access the archived web pages themselves (82 of accesses). In terms of unique archived web pages, there is no overall preference for a particular time, but the recent past (within the last year) shows significant repeat accesses.
|
Despite the significance of web archives in preserving web heritage, the aspect of web archive usage has been overlooked. The only previous related work is a study of the search behavior characterization for web archives @cite_11 . We highlight this work, but first we define the terms for our discussion.
|
{
"cite_N": [
"@cite_11"
],
"mid": [
"2404553285"
],
"abstract": [
"Web archives are a huge source of information to mine the past. However, tools to explore web archives are still in their infancy, in part due to the reduced knowledge that we have of their users. We contribute to this knowledge by presenting the first search behavior characterization of web archive users. We obtained detailed statistics about the users’ sessions, queries, terms and clicks from the analysis of their search logs. The results show that users did not spend much time and effort searching the past. They prefer short sessions, composed of short queries and few clicks. Full-text search is preferred to URL search, but both are frequently used. There is a strong evidence that users prefer the oldest documents over the newest, but mostly search without any temporal restriction. We discuss all these findings and their implications on the design of future web archives."
]
}
|
1309.3716
|
1607297408
|
In this paper we study a transmission power tune problem with densely deployed 802.11 Wireless Local Area Networks (WLANs). While previous papers emphasize on tuning transmission power with either PHY or MAC layer separately, optimally setting each Access Point's (AP's) transmission power of a densely deployed 802.11 network considering its dual effects on both layers remains an open problem. In this work, we design a measure by evaluating impacts of transmission power on network performance on both PHY and MAC layers. We show that such an optimization problem is intractable and then we investigate and develop an analytical framework to allow simple yet efficient solutions. Through simulations and numerical results, we observe clear benefits of the dual-effect model compared to solutions optimizing solely on a single layer; therefore, we conclude that tuning transmission power from a dual layer (PHY-MAC) point of view is essential and necessary for dense WLANs. We further form a game theoretical framework and investigate above power-tune problem in a strategic network. We show that with decentralized and strategic users, the Nash Equilibrium (N.E.) of the corresponding game is in-efficient and thereafter we propose a punishment based mechanism to enforce users to adopt the social optimal strategy profile under both perfect and imperfect sensing environments.
|
There have been many classical PHY layer power-tune studies using Shannon's capacity formula. For example, investigated a transmit power and carrier sensing threshold tuning problem for improving spatial reuse in @cite_3 . looked into the transmit power control problem through management of interference, energy and connectivity in @cite_9 . In @cite_14 , investigate distributed power control problem on physical layer; a distributed algorithm is given and critical performance criteria, such as convergence are analyzed. In @cite_4 , analyze several multi-user spectrum management problems with focus on power control.
|
{
"cite_N": [
"@cite_9",
"@cite_14",
"@cite_4",
"@cite_3"
],
"mid": [
"2056908709",
"2197034542",
"2149866915",
"1970103089"
],
"abstract": [
"Transmit power in wireless cellular networks is a key degree of freedom in the management of interference, energy, and connectivity. Power control in both the uplink and downlink of a cellular network has been extensively studied, especially over the last 15 years, and some of the results have enabled the continuous evolution and significant impact of the digital cellular technology. This survey provides a comprehensive discussion of the models, algorithms, analysis, and methodologies in this vast and growing literature. It starts with a taxonomy of the wide range of power control problem formulations, and progresses from the basic formulation to more sophisticated ones. When transmit power is the only set of optimization variables, algorithms for fixed SIR are presented first, before turning to their robust versions and joint SIR and power optimization. This is followed by opportunistic and non-cooperative power control. Then joint control of power together with beamforming pattern, base station assignment, spectrum allocation, and transmit schedule is surveyed one-by-one. Throughout the survey, we highlight the use of mathematical language and tools in the study of power control, including optimization theory, control theory, game theory, and linear algebra. Practical implementations of some of the algorithms in operational networks are discussed in the concluding section. As illustrated by the open problems presented at the end of most chapters, in the area of power control in cellular networks, there are still many under-explored directions and unresolved issues that remain theoretically challenging and practically important..",
"",
"Spectrum management is used to improve performance in multiuser communication system, e.g., cognitive radio or femtocell networks, where multiuser interference can lead to data rate degradation. We study the nonconvex NP-hard problem of maximizing a weighted sum rate in a multiuser Gaussian interference channel by power control subject to affine power constraints. By exploiting the fact that this problem can be restated as an optimization problem with constraints that are spectral radii of specially crafted nonnegative matrices, we derive necessary and sufficient optimality conditions and propose a global optimization algorithm based on the outer approximation method. Central to our techniques is the use of nonnegative matrix theory, e.g., nonnegative matrix inequalities and the Perron-Frobenius theorem. We also study an inner approximation method and a relaxation method that give insights to special cases. Our techniques and algorithm can be extended to a multiple carrier system model, e.g., OFDM system or receivers with interference suppression capability.",
"The importance of spatial reuse in wireless ad-hoc networks has been long recognized as a key to improving the network capacity. One can increase the level of spatial reuse by either reducing the transmit power or increasing the carrier sense threshold (thereby reducing the carrier sense range). On the other hand, as the transmit power decreases or the carrier sense threshold increases, the SINR decreases as a result of the smaller received signal or the increased interference level. Consequently, the data rate sustained by each transmission may decrease. This leads naturally to the following questions:(1)How can the trade-off between the increased level of spatial reuse and the decreased data rate each node can sustain be quantified? In other words,is there an optimal range of transmit power carrier sense threshold in which the network capacity is maximized? (2)What is the relation between the transmit power and the carrier sense threshold.In this paper, we study both problems, and show that (i)in the case that the achievable channel rate follows the Shannon capacity, spatial reuse depends only on the ratio of the transmit power to the carrier sense threshold; and (ii) in the case that only a set of discrete data rates are available, tuning the transmit power offers several advantages that tuning the carrier sense threshold cannot, provided that there is a sufficient number of power levels available. Based on the findings, we then propose a decentralized power and rate control algorithm to enable each node to adjust, based on its signal interference level, its transmit power and data rate. The transmit power is so determined that the transmitter can sustain a high data rate, while keeping the adverse interference effect on the other neighboring concurrent transmissions minimal. Simulation results have shown that, as compared to existing carrier sense threshold tuning algorithms, the proposed power and rate control algorithm yields higher network capacity."
]
}
|
1309.3716
|
1607297408
|
In this paper we study a transmission power tune problem with densely deployed 802.11 Wireless Local Area Networks (WLANs). While previous papers emphasize on tuning transmission power with either PHY or MAC layer separately, optimally setting each Access Point's (AP's) transmission power of a densely deployed 802.11 network considering its dual effects on both layers remains an open problem. In this work, we design a measure by evaluating impacts of transmission power on network performance on both PHY and MAC layers. We show that such an optimization problem is intractable and then we investigate and develop an analytical framework to allow simple yet efficient solutions. Through simulations and numerical results, we observe clear benefits of the dual-effect model compared to solutions optimizing solely on a single layer; therefore, we conclude that tuning transmission power from a dual layer (PHY-MAC) point of view is essential and necessary for dense WLANs. We further form a game theoretical framework and investigate above power-tune problem in a strategic network. We show that with decentralized and strategic users, the Nash Equilibrium (N.E.) of the corresponding game is in-efficient and thereafter we propose a punishment based mechanism to enforce users to adopt the social optimal strategy profile under both perfect and imperfect sensing environments.
|
More recently, power control problems have been analyzed under game theoretical framework. proposed a game theoretical approach for decentralized power allocation in @cite_0 . In @cite_1 , a congestion game model is proposed to analyze power control problem as a form of resource allocation. Equilibrium strategies have been given under certain assumptions. In @cite_5 , a power control problem is modeled as repeated games with strategic users and intervention theory is proposed to induce target strategy from users. Imperfect monitoring repeated game model is analyzed in @cite_6 with the assumption of a Local Spectrum Server (LSS). In @cite_2 , consider a power control problem w.r.t. reducing contention order on the link layer while keeping the physical layer interference under certain levels.
|
{
"cite_N": [
"@cite_1",
"@cite_6",
"@cite_0",
"@cite_2",
"@cite_5"
],
"mid": [
"2056237576",
"1967962702",
"2069492423",
"2073221003",
"2952139250"
],
"abstract": [
"In this paper, we introduce and analyze the properties of a class of games, the atomic congestion games on graphs (ACGGs), which is a generalization of the classical congestion games. In particular, an ACGG captures the spatial information that is often ignored in a classical congestion game. This is useful in many networking problems, e.g., wireless networks where interference among the users heavily depends on the spatial information. In an ACGG, a player's payoff for using a resource is a function of the number of players who interact with it and use the same resource. Such spatial information can be captured by a graph. We study fundamental properties of the ACGGs: under what conditions these games possess a pure strategy Nash equilibrium (PNE), or the finite improvement property (FIP), which is sufficient for the existence of a PNE. We show that a PNE may not exist in general, but that it does exist in many important special cases including tree, loop, or regular bipartite networks. The FIP holds for important special cases including systems with two resources or identical payoff functions for each resource. Finally, we present two wireless network applications of ACGGs: power control and channel contention under IEEE 802.11.",
"We develop a novel design framework for dynamic distributed spectrum sharing among secondary users (SUs), who adjust their power levels to compete for spectrum opportunities while satisfying the interference temperature (IT) constraints imposed by primary users. The considered interaction among the SUs is characterized by the following three unique features. First, the SUs are interacting with each other repeatedly and they can coexist in the system for a long time. Second, the SUs have limited and imperfect monitoring ability: they only observe whether the IT constraints are violated, and their observation is imperfect due to the erroneous measurements. Third, since the SUs are decentralized, they are selfish and aim to maximize their own long-term payoffs from utilizing the network rather than obeying the prescribed allocation of a centralized controller. To capture these unique features, we model the interaction of the SUs as a repeated game with imperfect monitoring. We first characterize the set of Pareto optimal operating points that can be achieved by deviation-proof spectrum sharing policies, which are policies that the selfish users find it in their interest to comply with. Next, for any given operating point in this set, we show how to construct a deviation-proof policy to achieve it. The constructed deviation-proof policy is amenable to distributed implementation, and allows users to transmit in a time-division multiple-access (TDMA) fashion. In the presence of strong multi-user interference, our policy outperforms existing spectrum sharing policies that dictate users to transmit at constant power levels simultaneously. Moreover, our policy can achieve Pareto optimality even when the SUs have limited and imperfect monitoring ability, as opposed to existing solutions based on repeated game models, which require perfect monitoring abilities. Simulation results validate our analytical results and quantify the performance gains enabled by the proposed spectrum sharing policies.",
"The rapidly growing demand for wireless communication makes efficient power allocation a critical factor in the network's efficient operation. Power allocation in cellular networks with interference, where users are selfish, has been recently studied by pricing methods. However, pricing methods do not result in efficient optimal power allocations for such systems for the following reason. Because of interference, the communication between the Base Station (BS) and a given user is affected by that between the BS and all other users. Thus, the power vector consisting of the transmission power in each BS-user link can be viewed as a public good which simultaneously affects the utilities of all the users in the network. It is well known (Mas-, Microeconomic Theory, Oxford University Press, London, 2002, Chap. 11.C) that in public good economies, standard efficiency theorems on market equilibrium do not apply and pricing mechanisms do not result in globally optimal allocations. In this paper we study power allocation in the presence of interference for a single cell wireless Code Division Multiple Access (CDMA) network from a game theoretic perspective. We consider a network where each user knows only its own utility and the channel gain from the base station to itself. We formulate the uplink power allocation problem as a public good allocation problem. We present a game form the Nash Equilibria of which yield power allocations that are optimal solutions of the corresponding centralized uplink network.",
"This paper addresses the joint selection and power assignment of a largest set of given links which can communicate successfully at the same time under the physical interference model in the duplex (i.e. bidirectional) mode. For the special setting in which all nodes have unlimited maximum transmission power, Halldorsson and Mitra [5] developed an approximation algorithm with a huge constant approximation bound. For the general setting in which all nodes have bounded maximum transmission power, the existence of constant approximation algorithm remains open. In this paper, we resolve this open problem by developing an approximation algorithm which not only works for the general setting of bounded maximum transmission power, but also has a much smaller constant approximation bound.",
"In communication systems where users share common resources, users' selfish behavior usually results in suboptimal resource utilization. There have been extensive works that model communication systems with selfish users as one-shot games and propose incentive schemes to achieve Pareto optimal action profiles as non-cooperative equilibria. However, in many communication systems, due to strong negative externalities among users, the sets of feasible payoffs in one-shot games are nonconvex. Thus, it is possible to expand the set of feasible payoffs by having users choose convex combinations of different payoffs. In this paper, we propose a repeated game model generalized by intervention. First, we use repeated games to convexify the set of feasible payoffs in one-shot games. Second, we combine conventional repeated games with intervention, originally proposed for one-shot games, to achieve a larger set of equilibrium payoffs and loosen requirements for users' patience to achieve it. We study the problem of maximizing a welfare function defined on users' equilibrium payoffs, subject to minimum payoff guarantees. Given the optimal equilibrium payoff, we derive the minimum intervention capability required and design corresponding equilibrium strategies. The proposed generalized repeated game model applies to various communication systems, such as power control and flow control."
]
}
|
1309.3716
|
1607297408
|
In this paper we study a transmission power tune problem with densely deployed 802.11 Wireless Local Area Networks (WLANs). While previous papers emphasize on tuning transmission power with either PHY or MAC layer separately, optimally setting each Access Point's (AP's) transmission power of a densely deployed 802.11 network considering its dual effects on both layers remains an open problem. In this work, we design a measure by evaluating impacts of transmission power on network performance on both PHY and MAC layers. We show that such an optimization problem is intractable and then we investigate and develop an analytical framework to allow simple yet efficient solutions. Through simulations and numerical results, we observe clear benefits of the dual-effect model compared to solutions optimizing solely on a single layer; therefore, we conclude that tuning transmission power from a dual layer (PHY-MAC) point of view is essential and necessary for dense WLANs. We further form a game theoretical framework and investigate above power-tune problem in a strategic network. We show that with decentralized and strategic users, the Nash Equilibrium (N.E.) of the corresponding game is in-efficient and thereafter we propose a punishment based mechanism to enforce users to adopt the social optimal strategy profile under both perfect and imperfect sensing environments.
|
In terms of computation, for standard integer optimization (or combinatorial optimization) problems researchers typically seek relaxation to convert the problem into a continuous problem in the hope it can be solved by standard LP or convex algorithms; in @cite_8 @cite_7 @cite_11 , efficient search algorithms have been proposed to tackle finite space optimization problems.
|
{
"cite_N": [
"@cite_11",
"@cite_7",
"@cite_8"
],
"mid": [
"2080834015",
"2024060531",
"546552"
],
"abstract": [
"The throughput maximization problem of wireless mesh access networks is addressed. For the case of cooperative access points, we present a negotiation-based throughput maximization algorithm which adjusts the operating frequency and power level among access points autonomously, from a game-theoretical perspective. We show that this algorithm converges to the optimal frequency and power assignment which yields the maximum overall throughput with arbitrarily high probability. Moreover, we analyze the scenario where access points belong to different regulation entities and hence non-cooperative. The long-term behavior and corresponding performance are investigated and the analytical results are verified by simulations.",
"There is a deep and useful connection between statistical mechanics (the behavior of systems with many degrees of freedom in thermal equilibrium at a finite temperature) and multivariate or combinatorial optimization (finding the minimum of a given function depending on many parameters). A detailed analogy with annealing in solids provides a framework for optimization of the properties of very large and complex systems. This connection to statistical mechanics exposes new information and provides an unfamiliar perspective on traditional optimization problems and methods.",
""
]
}
|
1309.4016
|
2949707104
|
The Internet Archive's (IA) Wayback Machine is the largest and oldest public web archive and has become a significant repository of our recent history and cultural heritage. Despite its importance, there has been little research about how it is discovered and used. Based on web access logs, we analyze what users are looking for, why they come to IA, where they come from, and how pages link to IA. We find that users request English pages the most, followed by the European languages. Most human users come to web archives because they do not find the requested pages on the live web. About 65 of the requested archived pages no longer exist on the live web. We find that more than 82 of human sessions connect to the Wayback Machine via referrals from other web sites, while only 15 of robots have referrers. Most of the links (86 ) from websites are to individual archived pages at specific points in time, and of those 83 no longer exist on the live web.
|
Many studies have investigated what is missing from digital libraries and web archives, in addition to the effect of this on the satisfaction of users' needs and expectations @cite_0 @cite_17 @cite_19 @cite_4 . In @cite_0 , the Internet Archive's coverage of the web was investigated. The results showed an unintentional international bias through uneven representation of different countries in the archive. @ @cite_17 suggest a tool to dynamically analyze the query logs of the digital library system, identify the missing content queries, and then direct the system to obtain the missing data. We investigate what is missing through an analysis of requests with an HTTP 404 status in the Wayback Machine web server logs.
|
{
"cite_N": [
"@cite_0",
"@cite_19",
"@cite_4",
"@cite_17"
],
"mid": [
"2136963875",
"2100612254",
"2084989254",
"2020993528"
],
"abstract": [
"The Internet Archive, an important initiative that maintains a record of the evolving Web, has the promise of being a key resource for historians and those who study the Web itself. The archive's goal is to index the whole Web without making any judgments about which pages are worth saving. The potential importance of the archive for longitudinal and historical Web research leads to the need to evaluate its coverage. This article focuses upon whether there is an international bias in its coverage. The results show that there are indeed large national differences in the archive's coverage of the Web. A subsequent statistical analysis found differing national average site ages and hyperlink structures to be plausible explanations for this uneven coverage. Although the bias is unintentional, researchers using the archive in the future need to be aware of this problem.",
"Some large scale topical digital libraries, such as CiteSeer, harvest online academic documents by crawling open-access archives, university and author homepages, and authors' self-submissions. While these approaches have so far built reasonable size libraries, they can suffer from having only a portion of the documents from specific publishing venues. We propose to use alternative online resources and techniques that maximally exploit other resources to build the complete document collection of any given publication venue. We investigate the feasibility of using publication metadata to guide the crawler towards authors' homepages to harvest what is missing from a digital library collection. We collect a real-world dataset from two Computer Science publishing venues, involving a total of 593 unique authors over a time frame of 1998 to 2004. We then identify the missing papers that are not indexed by CiteSeer. Using a fully automatic heuristic-based system that has the capability of locating authors' homepages and then using focused crawling to download the desired papers, we demonstrate that it is practical to harvest using a focused crawler academic papers that are missing from our digital library. Our harvester achieves a performance with an average recall level of 0.82 overall and 0.75 for those missing documents. Evaluation of the crawler's performance based on the harvest rate shows definite advantages over other crawling approaches and consistently outperforms a defined baseline crawler on a number of measures",
"This article proposes a process to retrieve the URL of a document for which metadata records exist in a digital library catalog but a pointer to the full text of the document is not available. The process uses results from queries submitted to Web search engines for finding the URL of the corresponding full text or any related material. We present a comprehensive study of this process in different situations by investigating different query strategies applied to three general purpose search engines (Google, Yahoo!, MSN) and two specialized ones (Scholar and CiteSeer), considering five user scenarios. Specifically, we have conducted experiments with metadata records taken from the Brazilian Digital Library of Computing (BDBComp) and The DBLP Computer Science Bibliography (DBLP). We found that Scholar was the most effective search engine for this task in all considered scenarios and that simple strategies for combining and re-ranking results from Scholar and Google significantly improve the retrieval quality. Moreover, we study the influence of the number of query results on the effectiveness of finding missing information as well as the coverage of the proposed scenarios.",
"This work shows how the content of a digital library can be enhanced to better satisfy its users' needs. Missing content is identified by finding missing content topics in the system's query log or in a pre-defined taxonomy of required knowledge. The collection is then enhanced with new relevant knowledge, which is extracted from external sources that satisfy those missing content topics. Experiments we conducted measure the precision of the system before and after content enhancement. The results demonstrate a significant improvement in the system effectiveness as a result of content enhancement and the superiority of the missing content enhancement policy over several other possible policies."
]
}
|
1309.4016
|
2949707104
|
The Internet Archive's (IA) Wayback Machine is the largest and oldest public web archive and has become a significant repository of our recent history and cultural heritage. Despite its importance, there has been little research about how it is discovered and used. Based on web access logs, we analyze what users are looking for, why they come to IA, where they come from, and how pages link to IA. We find that users request English pages the most, followed by the European languages. Most human users come to web archives because they do not find the requested pages on the live web. About 65 of the requested archived pages no longer exist on the live web. We find that more than 82 of human sessions connect to the Wayback Machine via referrals from other web sites, while only 15 of robots have referrers. Most of the links (86 ) from websites are to individual archived pages at specific points in time, and of those 83 no longer exist on the live web.
|
* Memento Terminology In this section, we explain the terminology we adopt in the rest of the paper. Memento @cite_16 is an HTTP protocol extension which enables time travel on the web by linking the current resources with their prior state. Memento defines the following terms: URI-R identifies the original resource. It is the resource as it used to appear on the live web. A URI-R may have 0 or more mementos (URI-Ms). URI-M identifies an archived snapshot of the URI-R at a specific datetime, which is called Memento-Datetime, e.g., URI-M @math = URI-R @math . URI-T identifies a TimeMap, a resource that provides a list of mementos (URI-Ms) for a URI-R with their Memento-Datetimes, e.g., @math . Although we use Memento terminology, the logs we analyze are from the Internet Archive's Wayback Machine and not the Memento API.
|
{
"cite_N": [
"@cite_16"
],
"mid": [
"1510484544"
],
"abstract": [
"The Web is ephemeral. Many resources have representations that change over time, and many of those representations are lost forever. A lucky few manage to reappear as archived resources that carry their own URIs. For example, some content management systems maintain version pages that reect a frozen prior state of their changing resources. Archives recurrently crawl the web to obtain the actual representation of resources, and subsequently make those available via special-purpose archived resources. In both cases, the archival copies have URIs that are protocolwise disconnected from the URI of the resource of which they represent a prior state. Indeed, the lack of temporal capabilities in the most common Web protocol, HTTP, prevents getting to an archived resource on the basis of the URI of its original. This turns accessing archived resources into a signicant discovery challenge for both human and software agents, which typically involves following a multitude of links from the original to the archival resource, or of searching archives for the original URI. This paper proposes the protocol-based Memento solution to address this problem, and describes a proof-of-concept experiment that includes major servers of archival content, including Wikipedia and the Internet Archive. The Memento solution is based on existing HTTP capabilities applied in a novel way to add the temporal dimension. The result is a framework in which archived resources can seamlessly be reached via the URI of their original: protocol-based time travel for the Web."
]
}
|
1309.4035
|
2127002961
|
Given appropriate representations of the semantic relations between carpenter and wood and between mason and stone (for example, vectors in a vector space model), a suitable algorithm should be able to recognize that these relations are highly similar (carpenter is to wood as mason is to stone; the relations are analogous). Likewise, with representations of dog, house, and kennel, an algorithm should be able to recognize that the semantic composition of dog and house, dog house, is highly similar to kennel (dog house and kennel are synonymous). It seems that these two tasks, recognizing relations and compositions, are closely connected. However, up to now, the best models for relations are significantly different from the best models for compositions. In this paper, we introduce a dual-space model that unifies these two tasks. This model matches the performance of the best previous models for relations and compositions. The dual-space model consists of a space for measuring domain similarity and a space for measuring function similarity. Carpenter and wood share the same domain, the domain of carpentry. Mason and stone share the same domain, the domain of masonry. Carpenter and mason share the same function, the function of artisans. Wood and stone share the same function, the function of materials. In the composition dog house, kennel has some domain overlap with both dog and house (the domains of pets and buildings). The function of kennel is similar to the function of house (the function of shelters). By combining domain and function similarities in various ways, we can model relations, compositions, and other aspects of semantics.
|
Information scalability: For phrases, the model should scale up with neither loss of information nor exponential growth in representation size as the number of component words in the phrases increases. For @math -ary semantic relations @cite_23 , the model should scale up with neither loss of information nor exponential growth in representation size as @math , the number of terms in the relations, increases.
|
{
"cite_N": [
"@cite_23"
],
"mid": [
"2167609405"
],
"abstract": [
"Many AI researchers and cognitive scientists have argued that analogy is the core of cognition. The most influential work on computational modeling of analogy-making is Structure Mapping Theory (SMT) and its implementation in the Structure Mapping Engine (SME). A limitation of SME is the requirement for complex hand-coded representations. We introduce the Latent Relation Mapping Engine (LRME), which combines ideas from SME and Latent Relational Analysis (LRA) in order to remove the requirement for hand-coded representations. LRME builds analogical mappings between lists of words, using a large corpus of raw text to automatically discover the semantic relations among the words. We evaluate LRME on a set of twenty analogical mapping problems, ten based on scientific analogies and ten based on common metaphors. LRME achieves human-level performance on the twenty problems. We compare LRME with a variety of alternative approaches and find that they are not able to reach the same level of performance."
]
}
|
1309.4035
|
2127002961
|
Given appropriate representations of the semantic relations between carpenter and wood and between mason and stone (for example, vectors in a vector space model), a suitable algorithm should be able to recognize that these relations are highly similar (carpenter is to wood as mason is to stone; the relations are analogous). Likewise, with representations of dog, house, and kennel, an algorithm should be able to recognize that the semantic composition of dog and house, dog house, is highly similar to kennel (dog house and kennel are synonymous). It seems that these two tasks, recognizing relations and compositions, are closely connected. However, up to now, the best models for relations are significantly different from the best models for compositions. In this paper, we introduce a dual-space model that unifies these two tasks. This model matches the performance of the best previous models for relations and compositions. The dual-space model consists of a space for measuring domain similarity and a space for measuring function similarity. Carpenter and wood share the same domain, the domain of carpentry. Mason and stone share the same domain, the domain of masonry. Carpenter and mason share the same function, the function of artisans. Wood and stone share the same function, the function of materials. In the composition dog house, kennel has some domain overlap with both dog and house (the domains of pets and buildings). The function of kennel is similar to the function of house (the function of shelters). By combining domain and function similarities in various ways, we can model relations, compositions, and other aspects of semantics.
|
Let @math be a phrase, such as a noun-modifier phrase, and assume that we have vectors @math and @math that represent the component words @math and @math . One of the earliest proposals for semantic composition is to represent @math by the vector @math that is the average of @math and @math @cite_56 . If we are using a cosine measure of vector similarity, taking the average of a set of vectors (or their centroid) is the same as adding the vectors, @math . Vector addition works relatively well in practice @cite_38 @cite_5 , although it lacks order sensitivity, adaptive capacity, and information scalability. Regarding order sensitivity and adaptive capacity, mitchell08,mitchell10 suggest using weights, @math , and tuning the weights to different values for different syntactic relations. In their experiments @cite_5 , weighted addition performed better than unweighted addition.
|
{
"cite_N": [
"@cite_38",
"@cite_5",
"@cite_56"
],
"mid": [
"2137607259",
"1984052055",
"1983578042"
],
"abstract": [
"This paper proposes a framework for representing the meaning of phrases and sentences in vector space. Central to our approach is vector composition which we operationalize in terms of additive and multiplicative functions. Under this framework, we introduce a wide range of composition models which we evaluate empirically on a sentence similarity task. Experimental results demonstrate that the multiplicative models are superior to the additive alternatives when compared against human judgments.",
"Vector-based models of word meaning have become increasingly popular in cognitive science. The appeal of these models lies in their ability to represent meaning simply by using distributional information under the assumption that words occurring within similar contexts are semantically similar. Despite their widespread use, vector-based models are typically directed at representing words in isolation, and methods for constructing representations for phrases or sentences have received little attention in the literature. This is in marked contrast to experimental evidence (e.g., in sentential priming) suggesting that semantic similarity is more complex than simply a relation between isolated words. This article proposes a framework for representing the meaning of word combinations in vector space. Central to our approach is vector composition, which we operationalize in terms of additive and multiplicative functions. Under this framework, we introduce a wide range of composition models that we evaluate empirically on a phrase similarity task.",
"How do people know as much as they do with as little information as they get? The problem takes many forms; learning vocabulary from text is an especially dramatic and convenient case for research. A new general theory of acquired similarity and knowledge representation, latent semantic analysis (LSA), is presented and used to successfully simulate such learning and several other psycholinguistic phenomena. By inducing global knowledge indirectly from local co-occurrence data in a large body of representative text, LSA acquired knowledge about the full vocabulary of English at a comparable rate to schoolchildren. LSA uses no prior linguistic or perceptual similarity knowledge; it is based solely on a general mathematical learning method that achieves powerful inductive effects by extracting the right number of dimensions (e.g., 300) to represent objects and contexts. Relations to other theories, phenomena, and problems are sketched."
]
}
|
1309.4035
|
2127002961
|
Given appropriate representations of the semantic relations between carpenter and wood and between mason and stone (for example, vectors in a vector space model), a suitable algorithm should be able to recognize that these relations are highly similar (carpenter is to wood as mason is to stone; the relations are analogous). Likewise, with representations of dog, house, and kennel, an algorithm should be able to recognize that the semantic composition of dog and house, dog house, is highly similar to kennel (dog house and kennel are synonymous). It seems that these two tasks, recognizing relations and compositions, are closely connected. However, up to now, the best models for relations are significantly different from the best models for compositions. In this paper, we introduce a dual-space model that unifies these two tasks. This model matches the performance of the best previous models for relations and compositions. The dual-space model consists of a space for measuring domain similarity and a space for measuring function similarity. Carpenter and wood share the same domain, the domain of carpentry. Mason and stone share the same domain, the domain of masonry. Carpenter and mason share the same function, the function of artisans. Wood and stone share the same function, the function of materials. In the composition dog house, kennel has some domain overlap with both dog and house (the domains of pets and buildings). The function of kennel is similar to the function of house (the function of shelters). By combining domain and function similarities in various ways, we can model relations, compositions, and other aspects of semantics.
|
mitchell08,mitchell10 suggest element-wise multiplication as a composition operation, @math , where @math . Like vector addition, element-wise multiplication suffers from a lack of order sensitivity, adaptive capacity, and information scalability. Nonetheless, in an experimental evaluation of seven compositional models and two noncompositional models, element-wise multiplication had the best performance @cite_5 .
|
{
"cite_N": [
"@cite_5"
],
"mid": [
"1984052055"
],
"abstract": [
"Vector-based models of word meaning have become increasingly popular in cognitive science. The appeal of these models lies in their ability to represent meaning simply by using distributional information under the assumption that words occurring within similar contexts are semantically similar. Despite their widespread use, vector-based models are typically directed at representing words in isolation, and methods for constructing representations for phrases or sentences have received little attention in the literature. This is in marked contrast to experimental evidence (e.g., in sentential priming) suggesting that semantic similarity is more complex than simply a relation between isolated words. This article proposes a framework for representing the meaning of word combinations in vector space. Central to our approach is vector composition, which we operationalize in terms of additive and multiplicative functions. Under this framework, we introduce a wide range of composition models that we evaluate empirically on a phrase similarity task."
]
}
|
1309.4035
|
2127002961
|
Given appropriate representations of the semantic relations between carpenter and wood and between mason and stone (for example, vectors in a vector space model), a suitable algorithm should be able to recognize that these relations are highly similar (carpenter is to wood as mason is to stone; the relations are analogous). Likewise, with representations of dog, house, and kennel, an algorithm should be able to recognize that the semantic composition of dog and house, dog house, is highly similar to kennel (dog house and kennel are synonymous). It seems that these two tasks, recognizing relations and compositions, are closely connected. However, up to now, the best models for relations are significantly different from the best models for compositions. In this paper, we introduce a dual-space model that unifies these two tasks. This model matches the performance of the best previous models for relations and compositions. The dual-space model consists of a space for measuring domain similarity and a space for measuring function similarity. Carpenter and wood share the same domain, the domain of carpentry. Mason and stone share the same domain, the domain of masonry. Carpenter and mason share the same function, the function of artisans. Wood and stone share the same function, the function of materials. In the composition dog house, kennel has some domain overlap with both dog and house (the domains of pets and buildings). The function of kennel is similar to the function of house (the function of shelters). By combining domain and function similarities in various ways, we can model relations, compositions, and other aspects of semantics.
|
Another approach is to use a tensor product for composition @cite_49 @cite_26 @cite_57 @cite_58 , such as the outer product, @math . The outer product of two vectors ( @math and @math ), each with @math elements, is an @math matrix ( @math ). The outer product of three vectors is an @math third-order tensor. This results in an information scalability problem: The representations grow exponentially large as the phrases grow longer. There are ways to avoid the exponential growth; for example, a third-order tensor with a rank of 1 on all three modes may be compactly encoded by its three component vectors. kolda09 discuss compact tensor representations. Furthermore, the outer product did not perform as well as element-wise multiplication in mitchell10 experiments. Recent work with tensor products @cite_34 @cite_10 has attempted to address the issue of information scalability.
|
{
"cite_N": [
"@cite_26",
"@cite_34",
"@cite_57",
"@cite_49",
"@cite_58",
"@cite_10"
],
"mid": [
"2001379035",
"",
"2193700427",
"2013494846",
"",
"1845242646"
],
"abstract": [
"Modern approaches to semantic analysis if reformulated as Hilbert-space problems reveal formal structures known from quantum mechanics. A similar situation is found in distributed representations of cognitive structures developed for the purpose of neural networks. We take a closer look at similarities and differences between the above two fields and quantum information theory.",
"",
"The are two main approaches to the representation of meaning in Computational Linguistics: a symbolic approach and a distributional approach. This paper considers the fundamental question of how these approaches might be combined. The proposal is to adapt a method from the Cognitive Science literature, in which symbolic and connectionist representations are combined using tensor products. Possible applications of this method for language processing are described. Finally, a potentially fruitful link between Quantum Mechanics, Computational Linguistics, and other related areas such as Information Retrieval and Machine Learning, is proposed.",
"Abstract A general method, the tensor product representation, is defined for the connectionist representation of value variable bindings. The technique is a formalization of the idea that a set of value variable pairs can be represented by accumulating activity in a collection of units each of which computes the product of a feature of a variable and a feature of its value. The method allows the fully distributed representation of bindings and symbolic structures. Fully and partially localized special cases of the tensor product representation reduce to existing cases of connectionist representations of structured data. The representation rests on a principled analysis of structure; it saturates gracefully as larger structures are represented; it permits recursive construction of complex representations from simpler ones; it respects the independence of the capacities to generate and maintain multiple bindings in parallel; it extends naturally to continuous structures and continuous representational patterns; it permits values to also serve as variables; and it enables analysis of the interference of symbolic structures stored in associative memories. It has also served as the basis for working connectionist models of high-level cognitive tasks.",
"",
"Formal and distributional semantic models offer complementary benefits in modeling meaning. The categorical compositional distributional model of meaning of (2010) (abbreviated to DisCoCat in the title) combines aspects of both to provide a general framework in which meanings of words, obtained distributionally, are composed using methods from the logical setting to form sentence meaning. Concrete consequences of this general abstract setting and applications to empirical data are under active study (, 2011; Grefenstette and Sadrzadeh, 2011). In this paper, we extend this study by examining transitive verbs, represented as matrices in a DisCoCat. We discuss three ways of constructing such matrices, and evaluate each method in a disambiguation task developed by Grefenstette and Sadrzadeh (2011)."
]
}
|
1309.4035
|
2127002961
|
Given appropriate representations of the semantic relations between carpenter and wood and between mason and stone (for example, vectors in a vector space model), a suitable algorithm should be able to recognize that these relations are highly similar (carpenter is to wood as mason is to stone; the relations are analogous). Likewise, with representations of dog, house, and kennel, an algorithm should be able to recognize that the semantic composition of dog and house, dog house, is highly similar to kennel (dog house and kennel are synonymous). It seems that these two tasks, recognizing relations and compositions, are closely connected. However, up to now, the best models for relations are significantly different from the best models for compositions. In this paper, we introduce a dual-space model that unifies these two tasks. This model matches the performance of the best previous models for relations and compositions. The dual-space model consists of a space for measuring domain similarity and a space for measuring function similarity. Carpenter and wood share the same domain, the domain of carpentry. Mason and stone share the same domain, the domain of masonry. Carpenter and mason share the same function, the function of artisans. Wood and stone share the same function, the function of materials. In the composition dog house, kennel has some domain overlap with both dog and house (the domains of pets and buildings). The function of kennel is similar to the function of house (the function of shelters). By combining domain and function similarities in various ways, we can model relations, compositions, and other aspects of semantics.
|
Circular convolution is similar to the outer product, but the outer product matrix is compressed back down to a vector, @math @cite_13 @cite_42 . This avoids information explosion, but it results in information loss. Circular convolution performed poorly in mitchell10 experiments.
|
{
"cite_N": [
"@cite_42",
"@cite_13"
],
"mid": [
"1986707196",
"2157306293"
],
"abstract": [
"The authors present a computational model that builds a holographic lexicon representing both word meaning and word order from unsupervised experience with natural language. The model uses simple convolution and superposition mechanisms (cf. B. B. Murdock, 1982) to learn distributed holographic representations for words. The structure of the resulting lexicon can account for empirical data from classic experiments studying semantic typicality, categorization, priming, and semantic constraint in sentence completions. Furthermore, order information can be retrieved from the holographic representations, allowing the model to account for limited word transitions without the need for built-in transition rules. The model demonstrates that a broad range of psychological data can be accounted for directly from the structure of lexical representations learned in this way, without the need for complexity to be built into either the processing mechanisms or the representations. The holographic representations are an appropriate knowledge representation to be used by higher order models of language comprehension, relieving the complexity required at the higher level.",
"Associative memories are conventionally used to represent data with very simple structure: sets of pairs of vectors. This paper describes a method for representing more complex compositional structure in distributed representations. The method uses circular convolution to associate items, which are represented by vectors. Arbitrary variable bindings, short sequences of various lengths, simple frame-like structures, and reduced representations can be represented in a fixed width vector. These representations are items in their own right and can be used in constructing compositional structures. The noisy reconstructions extracted from convolution memories can be cleaned up by using a separate associative memory that has good reconstructive properties. >"
]
}
|
1309.4035
|
2127002961
|
Given appropriate representations of the semantic relations between carpenter and wood and between mason and stone (for example, vectors in a vector space model), a suitable algorithm should be able to recognize that these relations are highly similar (carpenter is to wood as mason is to stone; the relations are analogous). Likewise, with representations of dog, house, and kennel, an algorithm should be able to recognize that the semantic composition of dog and house, dog house, is highly similar to kennel (dog house and kennel are synonymous). It seems that these two tasks, recognizing relations and compositions, are closely connected. However, up to now, the best models for relations are significantly different from the best models for compositions. In this paper, we introduce a dual-space model that unifies these two tasks. This model matches the performance of the best previous models for relations and compositions. The dual-space model consists of a space for measuring domain similarity and a space for measuring function similarity. Carpenter and wood share the same domain, the domain of carpentry. Mason and stone share the same domain, the domain of masonry. Carpenter and mason share the same function, the function of artisans. Wood and stone share the same function, the function of materials. In the composition dog house, kennel has some domain overlap with both dog and house (the domains of pets and buildings). The function of kennel is similar to the function of house (the function of shelters). By combining domain and function similarities in various ways, we can model relations, compositions, and other aspects of semantics.
|
One application for semantic composition is measuring the similarity of phrases @cite_3 @cite_5 . Kernel methods have been applied to the closely related task of identifying paraphrases @cite_8 , but the emphasis with kernel methods is on syntactic similarity, rather than semantic similarity.
|
{
"cite_N": [
"@cite_5",
"@cite_3",
"@cite_8"
],
"mid": [
"1984052055",
"1973942085",
"2110433488"
],
"abstract": [
"Vector-based models of word meaning have become increasingly popular in cognitive science. The appeal of these models lies in their ability to represent meaning simply by using distributional information under the assumption that words occurring within similar contexts are semantically similar. Despite their widespread use, vector-based models are typically directed at representing words in isolation, and methods for constructing representations for phrases or sentences have received little attention in the literature. This is in marked contrast to experimental evidence (e.g., in sentential priming) suggesting that semantic similarity is more complex than simply a relation between isolated words. This article proposes a framework for representing the meaning of word combinations in vector space. Central to our approach is vector composition, which we operationalize in terms of additive and multiplicative functions. Under this framework, we introduce a wide range of composition models that we evaluate empirically on a phrase similarity task.",
"We address the task of computing vector space representations for the meaning of word occurrences, which can vary widely according to context. This task is a crucial step towards a robust, vector-based compositional account of sentence meaning. We argue that existing models for this task do not take syntactic structure sufficiently into account. We present a novel structured vector space model that addresses these issues by incorporating the selectional preferences for words' argument positions. This makes it possible to integrate syntax into the computation of word meaning in context. In addition, the model performs at and above the state of the art for modeling the contextual adequacy of paraphrases.",
"Natural Language Processing (NLP) for Information Retrieval has always been an interesting and challenging research area. Despite the high expectations, most of the results indicate that successfully using NLP is very complex. In this paper, we show how Support Vector Machines along with kernel functions can effectively represent syntax and semantics. Our experiments on question answer classification show that the above models highly improve on bag-of-words on a TREC dataset."
]
}
|
1309.4035
|
2127002961
|
Given appropriate representations of the semantic relations between carpenter and wood and between mason and stone (for example, vectors in a vector space model), a suitable algorithm should be able to recognize that these relations are highly similar (carpenter is to wood as mason is to stone; the relations are analogous). Likewise, with representations of dog, house, and kennel, an algorithm should be able to recognize that the semantic composition of dog and house, dog house, is highly similar to kennel (dog house and kennel are synonymous). It seems that these two tasks, recognizing relations and compositions, are closely connected. However, up to now, the best models for relations are significantly different from the best models for compositions. In this paper, we introduce a dual-space model that unifies these two tasks. This model matches the performance of the best previous models for relations and compositions. The dual-space model consists of a space for measuring domain similarity and a space for measuring function similarity. Carpenter and wood share the same domain, the domain of carpentry. Mason and stone share the same domain, the domain of masonry. Carpenter and mason share the same function, the function of artisans. Wood and stone share the same function, the function of materials. In the composition dog house, kennel has some domain overlap with both dog and house (the domains of pets and buildings). The function of kennel is similar to the function of house (the function of shelters). By combining domain and function similarities in various ways, we can model relations, compositions, and other aspects of semantics.
|
Neural network models have been combined with vector space models for the task of language modeling @cite_6 @cite_47 @cite_65 , with impressive results. The goal of a language model is to estimate the probability of a phrase or to decide which of several phrases is the most likely. VSMs can improve the probability estimates of a language model by measuring the similarity of the words in the phrases and smoothing probabilities over groups of similar words. However, in a language model, words are considered similar to the degree that they can be exchanged without altering the probability of a given phrase, without regard to whether the exchange alters the meaning of the phrase. This is like function similarity, which measures the degree to which words have similar functional roles, but these language models are missing anything like domain similarity.
|
{
"cite_N": [
"@cite_47",
"@cite_65",
"@cite_6"
],
"mid": [
"",
"2103305545",
"2132339004"
],
"abstract": [
"",
"Paraphrase detection is the task of examining two sentences and determining whether they have the same meaning. In order to obtain high accuracy on this task, thorough syntactic and semantic analysis of the two statements is needed. We introduce a method for paraphrase detection based on recursive autoencoders (RAE). Our unsupervised RAEs are based on a novel unfolding objective and learn feature vectors for phrases in syntactic trees. These features are used to measure the word- and phrase-wise similarity between two sentences. Since sentences may be of arbitrary length, the resulting matrix of similarity measures is of variable size. We introduce a novel dynamic pooling layer which computes a fixed-sized representation from the variable-sized matrices. The pooled representation is then used as input to a classifier. Our method outperforms other state-of-the-art approaches on the challenging MSRP paraphrase corpus.",
"A goal of statistical language modeling is to learn the joint probability function of sequences of words in a language. This is intrinsically difficult because of the curse of dimensionality: a word sequence on which the model will be tested is likely to be different from all the word sequences seen during training. Traditional but very successful approaches based on n-grams obtain generalization by concatenating very short overlapping sequences seen in the training set. We propose to fight the curse of dimensionality by learning a distributed representation for words which allows each training sentence to inform the model about an exponential number of semantically neighboring sentences. The model learns simultaneously (1) a distributed representation for each word along with (2) the probability function for word sequences, expressed in terms of these representations. Generalization is obtained because a sequence of words that has never been seen before gets high probability if it is made of words that are similar (in the sense of having a nearby representation) to words forming an already seen sentence. Training such large models (with millions of parameters) within a reasonable time is itself a significant challenge. We report on experiments using neural networks for the probability function, showing on two text corpora that the proposed approach significantly improves on state-of-the-art n-gram models, and that the proposed approach allows to take advantage of longer contexts."
]
}
|
1309.4035
|
2127002961
|
Given appropriate representations of the semantic relations between carpenter and wood and between mason and stone (for example, vectors in a vector space model), a suitable algorithm should be able to recognize that these relations are highly similar (carpenter is to wood as mason is to stone; the relations are analogous). Likewise, with representations of dog, house, and kennel, an algorithm should be able to recognize that the semantic composition of dog and house, dog house, is highly similar to kennel (dog house and kennel are synonymous). It seems that these two tasks, recognizing relations and compositions, are closely connected. However, up to now, the best models for relations are significantly different from the best models for compositions. In this paper, we introduce a dual-space model that unifies these two tasks. This model matches the performance of the best previous models for relations and compositions. The dual-space model consists of a space for measuring domain similarity and a space for measuring function similarity. Carpenter and wood share the same domain, the domain of carpentry. Mason and stone share the same domain, the domain of masonry. Carpenter and mason share the same function, the function of artisans. Wood and stone share the same function, the function of materials. In the composition dog house, kennel has some domain overlap with both dog and house (the domains of pets and buildings). The function of kennel is similar to the function of house (the function of shelters). By combining domain and function similarities in various ways, we can model relations, compositions, and other aspects of semantics.
|
erk08 model and related models @cite_35 address linguistic creativity, order sensitivity, adaptive capacity, and information scalability, but they are not suitable for measuring the similarity of semantic relations. Consider the analogy traffic is to street as water is to riverbed . Let @math represent traffic @math street and let @math represent water @math riverbed . The transformation of @math , @math , @math , and @math to @math , @math , @math , and @math reinforces the connection between traffic and street and between water and riverbed , but it does not help us recognize the relational similarity between traffic @math street and water @math riverbed . Of course, these models were not designed for relational similarity, so this is not surprising. However, the goal here is to find a unified model that can handle both compositions and relations.
|
{
"cite_N": [
"@cite_35"
],
"mid": [
"2110616675"
],
"abstract": [
"We present a syntactically enriched vector model that supports the computation of contextualized semantic representations in a quasi compositional fashion. It employs a systematic combination of first- and second-order context vectors. We apply our model to two different tasks and show that (i) it substantially outperforms previous work on a paraphrase ranking task, and (ii) achieves promising results on a wordsense similarity task; to our knowledge, it is the first time that an unsupervised method has been applied to this task."
]
}
|
1309.4035
|
2127002961
|
Given appropriate representations of the semantic relations between carpenter and wood and between mason and stone (for example, vectors in a vector space model), a suitable algorithm should be able to recognize that these relations are highly similar (carpenter is to wood as mason is to stone; the relations are analogous). Likewise, with representations of dog, house, and kennel, an algorithm should be able to recognize that the semantic composition of dog and house, dog house, is highly similar to kennel (dog house and kennel are synonymous). It seems that these two tasks, recognizing relations and compositions, are closely connected. However, up to now, the best models for relations are significantly different from the best models for compositions. In this paper, we introduce a dual-space model that unifies these two tasks. This model matches the performance of the best previous models for relations and compositions. The dual-space model consists of a space for measuring domain similarity and a space for measuring function similarity. Carpenter and wood share the same domain, the domain of carpentry. Mason and stone share the same domain, the domain of masonry. Carpenter and mason share the same function, the function of artisans. Wood and stone share the same function, the function of materials. In the composition dog house, kennel has some domain overlap with both dog and house (the domains of pets and buildings). The function of kennel is similar to the function of house (the function of shelters). By combining domain and function similarities in various ways, we can model relations, compositions, and other aspects of semantics.
|
Some later work with classifying semantic relations has used different algorithms, but the same underlying intuition about hierarchical similarity @cite_40 @cite_68 @cite_36 . We use a similar intuition here, since similarity in function space is closely related to hierarchical similarity, @math , as we will see later (). However, including domain space in the relational similarity measure saves us from violating Equation .
|
{
"cite_N": [
"@cite_36",
"@cite_40",
"@cite_68"
],
"mid": [
"1537146566",
"2159902885",
"170652570"
],
"abstract": [
"We study the performance of two representations of word meaning in learning noun-modifier semantic relations. One representation is based on lexical resources, in particular WordNet, the other - on a corpus. We experimented with decision trees, instance-based learning and Support Vector Machines. All these methods work well in this learning task. We report high precision, recall and F-score, and small variation in performance across several 10-fold cross-validation runs. The corpus-based method has the advantage of working with data without word-sense annotations and performs well over the baseline. The WordNet-based method, requiring word-sense annotated data, has higher precision.",
"In many types of technical texts, meaning is embedded in noun compounds. A language understanding program needs to be able to interpret these in order to ascertain sentence meaning. We explore the possibility of using an existing lexical hierarchy for the purpose of placing words from a noun compound into categories, and then using this category membership to determine the relation that holds between the nouns. In this paper we present the results of an analysis of this method on two-word noun compounds from the biomedical domain, obtaining classification accuracy of approximately 90 . Since lexical hierarchies are not necessarily ideally suited for this task, we also pose the question: how far down the hierarchy must the algorithm descend before all the terms within the subhierarchy behave uniformly with respect to the semantic relation in question? We find that the topmost levels of the hierarchy yield an accurate classification, thus providing an economic way of assigning relations to noun compounds.",
""
]
}
|
1309.4035
|
2127002961
|
Given appropriate representations of the semantic relations between carpenter and wood and between mason and stone (for example, vectors in a vector space model), a suitable algorithm should be able to recognize that these relations are highly similar (carpenter is to wood as mason is to stone; the relations are analogous). Likewise, with representations of dog, house, and kennel, an algorithm should be able to recognize that the semantic composition of dog and house, dog house, is highly similar to kennel (dog house and kennel are synonymous). It seems that these two tasks, recognizing relations and compositions, are closely connected. However, up to now, the best models for relations are significantly different from the best models for compositions. In this paper, we introduce a dual-space model that unifies these two tasks. This model matches the performance of the best previous models for relations and compositions. The dual-space model consists of a space for measuring domain similarity and a space for measuring function similarity. Carpenter and wood share the same domain, the domain of carpentry. Mason and stone share the same domain, the domain of masonry. Carpenter and mason share the same function, the function of artisans. Wood and stone share the same function, the function of materials. In the composition dog house, kennel has some domain overlap with both dog and house (the domains of pets and buildings). The function of kennel is similar to the function of house (the function of shelters). By combining domain and function similarities in various ways, we can model relations, compositions, and other aspects of semantics.
|
At first the patterns in these pair--pattern matrices were generated by hand @cite_12 @cite_51 , but later work @cite_2 used automatically generated patterns. Other authors have used variations of this technique @cite_45 @cite_31 @cite_44 @cite_48 @cite_24 . All of these models suffer from the linguistic creativity problem. Because the models are noncompositional (holistic), they cannot scale up to handle the huge number of possible pairs. Even the largest corpus cannot contain all the pairs that a human speaker might use in daily conversation.
|
{
"cite_N": [
"@cite_48",
"@cite_44",
"@cite_24",
"@cite_45",
"@cite_2",
"@cite_31",
"@cite_51",
"@cite_12"
],
"mid": [
"2164037733",
"2157302829",
"2079629183",
"22977213",
"2109830295",
"2020328542",
"2038227658",
"2102515914"
],
"abstract": [
"Measuring the similarity between semantic relations that hold among entities is an important and necessary step in various Web related tasks such as relation extraction, information retrieval and analogy detection. For example, consider the case in which a person knows a pair of entities (e.g. Google, YouTube), between which a particular relation holds (e.g. acquisition). The person is interested in retrieving other such pairs with similar relations (e.g. Microsoft, Powerset). Existing keyword-based search engines cannot be applied directly in this case because, in keyword-based search, the goal is to retrieve documents that are relevant to the words used in a query -- not necessarily to the relations implied by a pair of words. We propose a relational similarity measure, using a Web search engine, to compute the similarity between semantic relations implied by two pairs of words. Our method has three components: representing the various semantic relations that exist between a pair of words using automatically extracted lexical patterns, clustering the extracted lexical patterns to identify the different patterns that express a particular semantic relation, and measuring the similarity between semantic relations using a metric learning approach. We evaluate the proposed method in two tasks: classifying semantic relations between named entities, and solving word-analogy questions. The proposed method outperforms all baselines in a relation classification task with a statistically significant average precision score of 0.74. Moreover, it reduces the time taken by Latent Relational Analysis to process 374 word-analogy questions from 9 days to less than 6 hours, with an SAT score of 51 .",
"We present a novel framework for the discovery and representation of general semantic relationships that hold between lexical items. We propose that each such relationship can be identified with a cluster of patterns that captures this relationship. We give a fully unsupervised algorithm for pattern cluster discovery, which searches, clusters and merges highfrequency words-based patterns around randomly selected hook words. Pattern clusters can be used to extract instances of the corresponding relationships. To assess the quality of discovered relationships, we use the pattern clusters to automatically generate SAT analogy questions. We also compare to a set of known relationships, achieving very good results in both methods. The evaluation (done in both English and Russian) substantiates the premise that our pattern clusters indeed reflect relationships perceived by humans.",
"Many methods are available for computing semantic similarity between individual words, but certain NLP tasks require the comparison of word pairs. This paper presents a kernel-based framework for application to relational reasoning tasks of this kind. The model presented here combines information about two distinct types of word pair similarity: lexical similarity and relational similarity. We present an efficient and flexible technique for implementing relational similarity and show the effectiveness of combining lexical and relational models by demonstrating state-of-the-art results on a compound noun interpretation task.",
"We present a novel, simple, unsupervised method for characterizing the semantic relations that hold between nouns in noun-noun compounds. The main idea is to discover predicates that make explicit the hidden relations between the nouns. This is accomplished by writing Web search engine queries that restate the noun compound as a relative clause containing a wildcard character to be filled in with a verb. A comparison to results from the literature suggest this is a promising approach.",
"There are at least two kinds of similarity. Relational similarity is correspondence between relations, in contrast with attributional similarity, which is correspondence between attributes. When two words have a high degree of attributional similarity, we call them synonyms. When two pairs of words have a high degree of relational similarity, we say that their relations are analogous. For example, the word pair mason:stone is analogous to the pair carpenter:wood. This article introduces Latent Relational Analysis (LRA), a method for measuring relational similarity. LRA has potential applications in many areas, including information extraction, word sense disambiguation, and information retrieval. Recently the Vector Space Model (VSM) of information retrieval has been adapted to measuring relational similarity, achieving a score of 47 on a collection of 374 college-level multiple-choice word analogy questions. In the VSM approach, the relation between a pair of words is characterized by a vector of frequencies of predefined patterns in a large corpus. LRA extends the VSM approach in three ways: (1) The patterns are derived automatically from the corpus, (2) the Singular Value Decomposition (SVD) is used to smooth the frequency data, and (3) automatically generated synonyms are used to explore variations of the word pairs. LRA achieves 56 on the 374 analogy questions, statistically equivalent to the average human score of 57 . On the related problem of classifying semantic relations, LRA achieves similar gains over the VSM.",
"The UC Berkeley team participated in the SemEval 2007 Task #4, with an approach that leverages the vast size of the Web in order to build lexically-specific features. The idea is to determine which verbs, prepositions, and conjunctions are used in sentences containing a target word pair, and to compare those to features extracted for other word pairs in order to determine which are most similar. By combining these Web features with words from the sentence context, our team was able to achieve the best results for systems of category C and third best for systems of category A.",
"We present an algorithm for learning from unlabeled text, based on the Vector Space Model (VSM) of information retrieval, that can solve verbal analogy questions of the kind found in the SAT college entrance exam. A verbal analogy has the form A:B::C:D, meaning \"A is to B as C is to D\"; for example, mason:stone::carpenter:wood. SAT analogy questions provide a word pair, A:B, and the problem is to select the most analogous word pair, C:D, from a set of five choices. The VSM algorithm correctly answers 47 of a collection of 374 college-level analogy questions (random guessing would yield 20 correct; the average college-bound senior high school student answers about 57 correctly). We motivate this research by applying it to a difficult problem in natural language processing, determining semantic relations in noun-modifier pairs. The problem is to classify a noun-modifier pair, such as \"laser printer\", according to the semantic relation between the noun (printer) and the modifier (laser). We use a supervised nearest-neighbour algorithm that assigns a class to a given noun-modifier pair by finding the most analogous noun-modifier pair in the training data. With 30 classes of semantic relations, on a collection of 600 labeled noun-modifier pairs, the learning algorithm attains an F value of 26.5 (random guessing: 3.3 ). With 5 classes of semantic relations, the F value is 43.2 (random: 20 ). The performance is state-of-the-art for both verbal analogies and noun-modifier relations.",
"Existing statistical approaches to natural language problems are very coarse approximations to the true complexity of language processing. As such, no single technique will be best for all problem instances. Many researchers are examining ensemble methods that combine the output of successful, separately developed modules to create more accurate solutions. This paper examines three merging rules for combining probability distributions: the well known mixture rule, the logarithmic rule, and a novel product rule. These rules were applied with state-of-the-art results to two problems commonly used to assess human mastery of lexical semantics|synonym questions and analogy questions. All three merging rules result in ensembles that are more accurate than any of their component modules. The dierences among the three rules are not statistically signicant, but it is suggestive that the popular mixture rule is not the best rule for either of the two problems."
]
}
|
1309.3260
|
2951381339
|
Topology control algorithms allow each node in a wireless multi-hop network to adjust the power at which it makes its transmissions and choose the set of neighbors with which it communicates directly, while preserving global goals such as connectivity or coverage. This allows each node to conserve energy and contribute to increasing the lifetime of the network. In this paper, in contrast to most previous work, we consider (i) both the energy costs of communication as well as the amount of available energy at each node, (ii) the realistic situation of varying rates of energy consumption at different nodes, and (iii) the fact that co-operation between nodes, where some nodes make a sacrifice by increasing energy consumption to help other nodes reduce their consumption, can be used to extend network lifetime. This paper introduces a new distributed topology control algorithm, called the Cooperative Topology Control with Adaptation (CTCA), based on a game-theoretic approach that maps the problem of maximizing the network's lifetime into an ordinal potential game. We prove the existence of a Nash equilibrium for the game. Our simulation results indicate that the CTCA algorithm extends the life of a network by more than 50 compared to the best previously-known algorithm. We also study the performance of the distributed CTCA algorithm in comparison to an optimal centralized algorithm as a function of the communication ranges of nodes and node density.
|
Traditional topology control algorithms such as Small Minimum-Energy Communication Network (SMECN) @cite_9 , Minimum Spanning Tree (MST) @cite_27 , DRNG @cite_35 , DLSS @cite_35 and STC @cite_25 usually start the topology control process with each node transmitting at its maximum transmission power to discover all of its neighbors. Local neighborhood and power-level information is next exchanged between neighbors. The minimum transmission power of each node such that the graph is still connected is later computed at each node without further communication between nodes. The Weighted Dynamic Topology Control (WDTC) @cite_3 algorithm improves upon the work of MST, and considers the remaining energy of each node in addition to the energy cost of communication across each pair of nodes. The algorithm, however, forces bidirectional communication between each pair of nodes and, in addition, requires periodic communication by each node at its maximum possible power level. Other related algorithms seek to offer a robust topology where the graph can stand multiple channel failures; for example, a @math -connected graph is sought in @cite_19 @cite_6 and a two-tiered network in @cite_26 .
|
{
"cite_N": [
"@cite_35",
"@cite_26",
"@cite_9",
"@cite_3",
"@cite_6",
"@cite_19",
"@cite_27",
"@cite_25"
],
"mid": [
"2100977699",
"2038464572",
"2951609386",
"2090407537",
"",
"2161396246",
"1597141356",
"2003012549"
],
"abstract": [
"Most existing topology control algorithms assume homogeneous wireless networks with uniform maximal transmission power, and cannot be directly applied to heterogeneous wireless networks where the maximal transmission power of each node may be different. We present two localized topology control algorithms for heterogeneous networks: Directed Relative Neighborhood Graph (DRNG) and Directed Local Spanning Subgraph (DLSS). In both algorithms, each node independently builds its neighbor set by adjusting the transmission power, and defines the network topology by using only local information. We prove that: 1) both DRNG and DLSS can preserve network connectivity; 2) the out-degree of any node in the resulting topology generated by DRNG or DLSS is bounded by a constant; and 3) DRNG and DLSS can preserve network bi-directionality. Simulation results indicate that DRNG and DLSS significantly outperform existing topology control algorithms for heterogeneous networks in several aspects.",
"We consider a two-tiered Wireless Sensor Network (WSN) consisting of sensor clusters deployed around strategic locations and base-stations (BSs) whose locations are relatively flexible. Within a sensor cluster, there are many small sensor nodes (SNs) that capture, encode and transmit relevant information from the designated area, and there is at least one application node (AN) that receives raw data from these SNs, creates a comprehensive local-view, and forwards the composite bit-stream toward a BS. In practice, both SN and AN are battery-powered and energy-constrained, and their node lifetimes directly affect the network lifetime of WSNs. In this paper, we focus on the topology control process for ANs and BSs, which constitute the upper tier of a two-tiered WSN. We propose approaches to maximize the topological network lifetime of the WSN, by arranging BS location and inter-AN relaying optimally. Based on an algorithm in Computational Geometry, we derive the optimal BS locations under three topological lifetime definitions according to mission criticality. In addition, by studying the intrinsic properties of WSNs, we establish the upper and lower bounds of their maximal topological lifetime. When inter-AN relaying becomes feasible and favorable, we continue to develop an optimal parallel relay allocation to further prolong the topological lifetime of the WSN. An equivalent serialized relay schedule is also obtained, so that each AN only needs to have one relay destination at any time throughout the mission. The experimental performance evaluation demonstrates the efficacy of topology control as a vital process to maximize the network lifetime of WSNs.",
"We propose a protocol that, given a communication network, computes a subnetwork such that, for every pair @math of nodes connected in the original network, there is a minimum-energy path between @math and @math in the subnetwork (where a minimum-energy path is one that allows messages to be transmitted with a minimum use of energy). The network computed by our protocol is in general a subnetwork of the one computed by the protocol given in [13]. Moreover, our protocol is computationally simpler. We demonstrate the performance improvements obtained by using the subnetwork computed by our protocol through simulation.",
"In this paper a new energy-aware weighted dynamic topology control (WDTC) algorithm is proposed to extend the lifetime of wireless network and balance the nodes’ energy consumption. The idea is that each node builds its local minimum spanning tree (MST) based on the energy-aware weighted graph and the network topology is adjusted accordingly. It was proved theoretically that the topology under WDTC algorithm could preserve the network connectivity and a sufficient condition for the degree of no more than 6 was also given. Simulation shows that WDTC algorithm can effectively prolong the network lifetime and has good topological features.",
"",
"In ad hoc wireless networks, it is crucial to minimize power consumption while maintaining key network properties. This work studies power assignments of wireless devices that minimize power while maintaining k-fault tolerance. Specifically, we require all links established by this power setting be symmetric and form a k-vertex connected subgraph of the network graph. This problem is known to be NP-hard. We show current heuristic approaches can use arbitrarily more power than the optimal solution. Hence, we seek approximation algorithms for this problem. We present three approximation algorithms. The first algorithm gives an O(kalpha)-approximation where is the best approximation factor for the related problem in wired networks (the best alpha so far is O(log k)). With a more careful analysis, we show our second (slightly more complicated) algorithm is an O(k)-approximation. Our third algorithm assumes that the edge lengths of the network graph form a metric. In this case, we present simple and practical distributed algorithms for the cases of 2- and 3-connectivity with constant approximation factors. We generalize this algorithm to obtain an O(k2c+2)-approximation for general k-connectivity (2 les c les 4 is the power attenuation exponent). Finally, we show that these approximation algorithms compare favorably with existing heuristics. We note that all algorithms presented in this paper can be used to minimize power while maintaining -edge connectivity with guaranteed approximation factors. Recently, different set of authors used the notion of k-connectivity and the results of this paper to deal with the fault-tolerance issues for static wireless network settings.",
"In this paper, we present a minimum spanning tree (MST) based topology control algorithm, called local minimum spanning tree (LMST), for wireless multi-hop networks. In this algorithm, each node builds its local minimum spanning tree independently and only keeps on-tree nodes that are one-hop away as its neighbors in the final topology. We analytically prove several important properties of LMST: (1) the topology derived under LMST preserves the network connectivity; (2) the node degree of any node in the resulting topology is bounded by 6; and (3) the topology can be transformed into one with bidirectional links (without impairing the network connectivity) after removal of all uni-directional links. These results are corroborated in the simulation study.",
"Each node in a wireless multi-hop network can adjust the power level at which it transmits and thus change the topology of the network to save energy by choosing the neighbors with which it directly communicates. Many previous algorithms for distributed topology control have assumed an ability at each node to deduce some location-based information such as the direction and the distance of its neighbor nodes with respect to itself. Such a deduction of location-based information, however, cannot be relied upon in real environments where the path loss exponents vary greatly leading to significant errors in distance estimates. Also, multipath effects may result in different signal paths with different loss characteristics, and none of these paths may be line-of-sight, making it difficult to estimate the direction of a neighboring node. In this paper, we present Step Topology Control (STC), a simple distributed topology control algorithm which reduces energy consumption while preserving the connectivity of a heterogeneous sensor network without use of any location-based information. The STC algorithm avoids the use of GPS devices and also makes no assumptions about the distance and direction between neighboring nodes. We show that the STC algorithm achieves the same or better order of communication and computational complexity when compared to other known algorithms that also preserve connectivity without the use of location-based information. We also present a detailed simulation-based comparative analysis of the energy savings and interference reduction achieved by the algorithms. The results show that, in spite of not incurring a higher communication or computational complexity, the STC algorithm performs better than other algorithms in uniform wireless environments and especially better when path loss characteristics are non-uniform."
]
}
|
1309.3260
|
2951381339
|
Topology control algorithms allow each node in a wireless multi-hop network to adjust the power at which it makes its transmissions and choose the set of neighbors with which it communicates directly, while preserving global goals such as connectivity or coverage. This allows each node to conserve energy and contribute to increasing the lifetime of the network. In this paper, in contrast to most previous work, we consider (i) both the energy costs of communication as well as the amount of available energy at each node, (ii) the realistic situation of varying rates of energy consumption at different nodes, and (iii) the fact that co-operation between nodes, where some nodes make a sacrifice by increasing energy consumption to help other nodes reduce their consumption, can be used to extend network lifetime. This paper introduces a new distributed topology control algorithm, called the Cooperative Topology Control with Adaptation (CTCA), based on a game-theoretic approach that maps the problem of maximizing the network's lifetime into an ordinal potential game. We prove the existence of a Nash equilibrium for the game. Our simulation results indicate that the CTCA algorithm extends the life of a network by more than 50 compared to the best previously-known algorithm. We also study the performance of the distributed CTCA algorithm in comparison to an optimal centralized algorithm as a function of the communication ranges of nodes and node density.
|
Another class of topology control algorithms is represented by the work reported in @cite_21 , where the authors provide a decentralized static complete-information game for power scheduling, considering both frame success rate and connectivity. Yet other approaches to increasing the lifetime of a wireless sensor network include grouping nodes into clusters to create a communication hierarchy in which nodes in a cluster communicate only with their cluster head and only cluster heads are allowed to communicate with other cluster heads or the sink node @cite_33 @cite_12 @cite_24 @cite_17 @cite_31 @cite_32 . In the work of @cite_4 , the authors tried to assign sensor nodes with different initial energy levels so that sensor nodes with high traffic load will be assigned more energy than those with smaller loads. By doing so, with the same amount of overall energy, the network's lifetime may be extended.
|
{
"cite_N": [
"@cite_4",
"@cite_33",
"@cite_21",
"@cite_32",
"@cite_24",
"@cite_31",
"@cite_12",
"@cite_17"
],
"mid": [
"2545881182",
"",
"2097941894",
"",
"1980122367",
"2143284675",
"2126379392",
"1970733942"
],
"abstract": [
"In a Wireless Sensor Network (WSN) the sensed data must be gathered and transmitted to a base station where it is further processed by end users. Since that kind of network consists of low-power nodes with limited battery power, power efficient methods must be applied for node communication and data gathering in order to achieve long network lifetimes. In such networks where in a round of communication many sensor nodes have data to send to a base station, it is very important to minimize the total energy consumed by the system so that the total network lifetime is maximized. The lifetime of such sensor network is the time until base station can receive data from all sensors in the network. In this work1, besides the conventional protocol of direct transmission or the use of dynamic routing protocols proposed in literature that potentially aggregates data, we propose an algorithm based on static routing among sensor nodes with unequal energy distribution in order to extend network lifetime and find a near-optimal node energy charge scheme that leads to both node and network lifetime prolongation. Our simulation results show that our algorithm achieves longer network lifetimes mainly because the final energy charge of each node is not uniform, while each node is free from maintaining complex route information and thus less infrastructure communication is needed.",
"",
"Wireless Heterogeneous Sensor Network (WHSN) facilitates ubiquitous information acquisition for Ambient Intelligence (AmI) systems. It is of great importance of power management and topology control for WHSN to achieve desirable network performances, such as clustering properties, connectivity and power efficiency. This paper proposes a game theoretic model of topology control to analyze the decentralized interactions among heterogeneous sensors. We study the utility function for nodes to achieve desirable frame success rate and node degree, while minimizing the power consumption. Specifically, we propose a static complete-information game formulation for power scheduling and then prove the existence of the Nash equilibrium with simultaneous move. Because the heterogeneous sensors typically react to neighboring environment based on local information and the states of sensors are evolving over time, the power-scheduling problem in WHSN is further formulated into a more realistic incomplete-information dynamic game model with sequential move. We then analyze the separating equilibrium, one of the perfect Bayesian equilibriums resulted from the dynamic game, with the sensors revealing their operational states from their actions. The sufficient and necessary conditions for the existence of separating equilibrium are derived for the dynamic Bayesian game, which provide theoretical basis to the proposed power scheduling algorithms, NEPow and BEPow. The primary contributions of this paper include applying game theory to analyze the distributed decision-making process of individual sensor nodes and to analyze the desirable utilities of heterogeneous sensor nodes. Simulations are presented to validate the proposed algorithms and the results show their ability of maintaining reliable connectivity, reducing power consumption, while achieving desirable network performances.",
"",
"Game theory has been used for decades in fields of science such as economics and biology, but recently it was used to model routing and packet forwarding in wireless ad-hoc and sensor networks. However, the clustering problem, related to self-organization of nodes into large groups, has not been studied under this framework. In this work our objective is to provide a game theoretical modeling of clustering for ad-hoc and sensor networks. The analysis is based on a non-cooperative game approach where each sensor behaves selfishly in order to conserve its energy and thus maximize its lifespan. We prove the Nash Equilibria of the game for pure and mixed strategies, the expected payoffs and the price of anarchy corresponding to these equilibria. Then, we use this analysis to formulate a clustering mechanism (which we called Clustered Routing for Selfish Sensors--CROSS), that can be applied to sensor networks in practice. Comparing this mechanism to a popular clustering technique, we show via simulations that CROSS achieves a performance similar to that of a very popular clustering algorithm.",
"A coalitional game theoretic scheme is proposed that aims at maximizing wireless sensor network lifetime under specified QoS. Employing a small number of nodes of increased computing power and lifetime called representatives, an adaptive clustering scheme is proposed where neighboring nodes form coalitions in order to increase energy efficiency at the cost of controllable data-accuracy reduction. The coalition formation is globally optimized by the representatives. The spatial correlation of the sensed phenomenon measurements is exploited to formulate a cooperation scheme that reduces drastically the number of node transmissions. The specifications regarding the accuracy of the collected data determine the extent of coalition formation. The efficiency and stability of the proposed coalitional scheme are studied through simulations.",
"Prolonged network lifetime, scalability, and load balancing are important requirements for many ad-hoc sensor network applications. Clustering sensor nodes is an effective technique for achieving these goals. In this work, we propose a new energy-efficient approach for clustering nodes in ad-hoc sensor networks. Based on this approach, we present a protocol, HEED (hybrid energy-efficient distributed clustering), that periodically selects cluster heads according to a hybrid of their residual energy and a secondary parameter, such as node proximity to its neighbors or node degree. HEED does not make any assumptions about the distribution or density of nodes, or about node capabilities, e.g., location-awareness. The clustering process terminates in O(1) iterations, and does not depend on the network topology or size. The protocol incurs low overhead in terms of processing cycles and messages exchanged. It also achieves fairly uniform cluster head distribution across the network. A careful selection of the secondary clustering parameter can balance load among cluster heads. Our simulation results demonstrate that HEED outperforms weight-based clustering protocols in terms of several cluster characteristics. We also apply our approach to a simple application to demonstrate its effectiveness in prolonging the network lifetime and supporting data aggregation.",
"This study considers an integrated topology control and routing problem in wireless sensor networks (WSNs), which are employed to gather data via use of sensors with limited energy resources. We employ a hierarchical topology and routing structure with multiple sinks and devise a topology control scheme via usable energy fraction at the sensors. We develop and examine three different mathematical models whose solutions prescribe clusterhead and sink locations and data routing from sensors to sinks in a period of a deployment cycle. We develop a heuristic solution algorithm which provides very small optimality gaps for the models. The approach utilizes two types of solution representations, a combination of multiple neighborhoods, and objective value-based cut inequalities for improving the evaluation of candidate solutions. We present extensive numerical test results and analysis of the models and the solution approach. We determine that our proposed model, which minimizes average energy usage and the range of remaining energy distribution at the sensors, captures important characteristics of topology control and routing integration in WSN design and exhibits significantly better performance than our benchmark models and a well-known protocol HEED in extending network lifetime."
]
}
|
1309.3260
|
2951381339
|
Topology control algorithms allow each node in a wireless multi-hop network to adjust the power at which it makes its transmissions and choose the set of neighbors with which it communicates directly, while preserving global goals such as connectivity or coverage. This allows each node to conserve energy and contribute to increasing the lifetime of the network. In this paper, in contrast to most previous work, we consider (i) both the energy costs of communication as well as the amount of available energy at each node, (ii) the realistic situation of varying rates of energy consumption at different nodes, and (iii) the fact that co-operation between nodes, where some nodes make a sacrifice by increasing energy consumption to help other nodes reduce their consumption, can be used to extend network lifetime. This paper introduces a new distributed topology control algorithm, called the Cooperative Topology Control with Adaptation (CTCA), based on a game-theoretic approach that maps the problem of maximizing the network's lifetime into an ordinal potential game. We prove the existence of a Nash equilibrium for the game. Our simulation results indicate that the CTCA algorithm extends the life of a network by more than 50 compared to the best previously-known algorithm. We also study the performance of the distributed CTCA algorithm in comparison to an optimal centralized algorithm as a function of the communication ranges of nodes and node density.
|
If the network's lifetime is measured in terms of how many transmissions can be made before the sensor nodes run out of energy, then maximizing the network's lifetime can be interpreted as maximizing the throughput of the network. In the work of @cite_18 , the authors studied the relationship between throughput of the network and its corresponding lifetime under an SINR model. But they focus on a specific network setting where sensor nodes' neighbors and the communication links are predetermined and the topology of the network remains constant throughout the network's lifetime.
|
{
"cite_N": [
"@cite_18"
],
"mid": [
"2121723384"
],
"abstract": [
"High throughput and lifetime are both crucial design objectives for a number of multihop wireless network applications. As these two objectives are often in conflict with each other, it naturally becomes important to identify the trade-offs between them. Several works in the literature have focused on improving one or the other, but investigating the trade-off between throughput and lifetime has received relatively less attention. We study this trade-off between the network throughput and lifetime for the case of fixed wireless networks, where link transmissions are coordinated to be conflict-free. We employ a realistic interference model based on the Signal-to-Interference-and-Noise Ratio (SINR), which is usually considered statistically sufficient to infer success or failure of wireless transmissions. Our analytical and numerical results provide several insights into the interplay between throughput, lifetime, and transmit power. Specifically, we find that with a fixed throughput requirement, lifetime is not monotonic with power-neither very low power nor very high power result in the best lifetime. We also find that, for a fixed transmit power, relaxing the throughput requirement may result in a more than proportional improvement in the lifetime for small enough relaxation factors. Taken together, our insights call for a careful balancing of objectives when designing a wireless network for high throughput and lifetime."
]
}
|
1309.3260
|
2951381339
|
Topology control algorithms allow each node in a wireless multi-hop network to adjust the power at which it makes its transmissions and choose the set of neighbors with which it communicates directly, while preserving global goals such as connectivity or coverage. This allows each node to conserve energy and contribute to increasing the lifetime of the network. In this paper, in contrast to most previous work, we consider (i) both the energy costs of communication as well as the amount of available energy at each node, (ii) the realistic situation of varying rates of energy consumption at different nodes, and (iii) the fact that co-operation between nodes, where some nodes make a sacrifice by increasing energy consumption to help other nodes reduce their consumption, can be used to extend network lifetime. This paper introduces a new distributed topology control algorithm, called the Cooperative Topology Control with Adaptation (CTCA), based on a game-theoretic approach that maps the problem of maximizing the network's lifetime into an ordinal potential game. We prove the existence of a Nash equilibrium for the game. Our simulation results indicate that the CTCA algorithm extends the life of a network by more than 50 compared to the best previously-known algorithm. We also study the performance of the distributed CTCA algorithm in comparison to an optimal centralized algorithm as a function of the communication ranges of nodes and node density.
|
A survey of topology control algorithms can be found in @cite_34 @cite_11 and a survey of the applications of game theory in wireless sensor networks can be found in @cite_8 @cite_14 .
|
{
"cite_N": [
"@cite_8",
"@cite_14",
"@cite_34",
"@cite_11"
],
"mid": [
"2010357191",
"2119548540",
"",
"2008979299"
],
"abstract": [
"Wireless sensor networks (WSNs) comprising of tiny, power-constrained nodes are gaining popularity due to their potential for use in a wide variety of environments like monitoring of environmental attributes, intrusion detection, and various military and civilian applications. While the sensing objectives of these environments are unique and application-dependent, a common performance criteria for wireless sensor networks is prolonging network lifetime while satisfying coverage and connectivity in the deployment region. Security is another important performance parameter in wireless sensor networks, where adverse and remote environments pose various kinds of threats to reliable network operation. In this paper, we look at the problems of security and energy efficiency and different formulations of these problems based on the approach of game theory. The potential applicability of WSNs to intruder detection environments also lends itself to game-theoretic formulation of these environments, where pursuit-evasion games provide a relevant framework to model detection, tracking and surveillance applications. The suitability of using game theory to study security and energy efficiency problems and pursuit-evasion scenarios using WSNs stems from the nature of strategic interactions between nodes. Approaches from game theory can be used to optimize node-level as well as network-wide performance by exploiting the distributed decision-making capabilities of WSNs. The use of game theory has proliferated, with a wide range of applications in wireless sensor networking. In the wake of this proliferation, we survey the use of game-theoretic approaches to formulate problems related to security and energy efficiency in wireless sensor networks.",
"Large-scale, self-organizing wireless sensor and mesh network deployments are being driven by recent technological developments such as The Internet of Things (IoT), Smart Grids and Smart Environment applications. Efficient use of the limited energy resources of wireless sensor network (WSN) nodes is critically important to support these advances, and application of topology control methods will have a profound impact on energy efficiency and hence battery lifetime. In this survey, we focus on the energy efficiency issue and present a comprehensive study of topology control techniques for extending the lifetime of battery powered WSNs. First, we review the significant topology control algorithms to provide insights into how energy efficiency is achieved by design. Further, these algorithms are classified according to the energy conservation approach they adopt, and evaluated by the trade-offs they offer to aid designers in selecting a technique that best suits their applications. Since the concept of \"network lifetime\" is widely used for assessing the algorithms' performance, we highlight various definitions of the term and discuss their merits and drawbacks. Recently, there has been growing interest in algorithms for non-planar topologies such as deployments in underwater environments or multi-level buildings. For this reason, we also include a detailed discussion of topology control algorithms that work efficiently in three dimensions. Based on the outcomes of our review, we identify a number of open research issues for achieving energy efficiency through topology control.",
"",
"Topology Control (TC) is one of the most important techniques used in wireless ad hoc and sensor networks to reduce energy consumption (which is essential to extend the network operational time) and radio interference (with a positive effect on the network traffic carrying capacity). The goal of this technique is to control the topology of the graph representing the communication links between network nodes with the purpose of maintaining some global graph property (e.g., connectivity), while reducing energy consumption and or interference that are strictly related to the nodes' transmitting range. In this article, we state several problems related to topology control in wireless ad hoc and sensor networks, and we survey state-of-the-art solutions which have been proposed to tackle them. We also outline several directions for further research which we hope will motivate researchers to undertake additional studies in this field."
]
}
|
1309.3319
|
2952149250
|
The design of scalable and robust overlay topologies has been a main research subject since the very origins of peer-to-peer (p2p) computing. Today, the corresponding optimization tradeoffs are fairly well-understood, at least in the static case and from a worst-case perspective. This paper revisits the peer-to-peer topology design problem from a self-organization perspective. We initiate the study of topologies which are optimized to serve the communication demand, or even self-adjusting as demand changes. The appeal of this new paradigm lies in the opportunity to be able to go beyond the lower bounds and limitations imposed by a static, communication-oblivious, topology. For example, the goal of having short routing paths (in terms of hop count) does no longer conflict with the requirement of having low peer degrees. We propose a simple overlay topology Obst(k) which is composed of k (rooted and directed) Binary Search Trees (BSTs), where k is a parameter. We first prove some fundamental bounds on what can and cannot be achieved optimizing a topology towards a static communication pattern (a static Obst(k)). In particular, we show that the number of BSTs that constitute the overlay can have a large impact on the routing costs, and that a single additional BST may reduce the amortized communication costs from Omega(log(n)) to O(1), where n is the number of peers. Subsequently, we discuss a natural self-adjusting extension of Obst(k), in which frequently communicating partners are "splayed together".
|
We are only aware of two papers on demand-optimized or self-adjusting overlay networks: @cite_11 study an overlay supporting gossip or epidemics on a dynamic topology. In contrast to our work, their focus is on unstructured networks (e.g., lookup or routing is not supported), and there is no formal evaluation. The paper closest to ours is @cite_2 . initiate the study of self-adjusting splay BSTs and introduce the double-splay algorithm. Although their work regards a distributed scenario, it focuses on a single BST only. Our work builds upon these results and investigates the benefits of having multiple trees, which is also more realistic in the context of p2p computing.
|
{
"cite_N": [
"@cite_2",
"@cite_11"
],
"mid": [
"2131175133",
"2117664859"
],
"abstract": [
"This paper initiates the study of self-adjusting networks (or distributed data structures) whose topologies dynamically adapt to a communication pattern σ. We present a fully decentralized self-adjusting solution called SplayNet. A SplayNet is a distributed generalization of the classic splay tree concept. It ensures short paths (which can be found using local-greedy routing) between communication partners while minimizing topological rearrangements. We derive an upper bound for the amortized communication cost of a SplayNet based on empirical entropies of σ, and show that SplayNets have several interesting convergence properties. For instance, SplayNets features a provable online optimality under special requests scenarios. We also investigate the optimal static network and prove different lower bounds for the average communication cost based on graph cuts and on the empirical entropy of the communication pattern σ. From these lower bounds it follows, e.g., that SplayNets are optimal in scenarios where the requests follow a product distribution as well. Finally, this paper shows that in contrast to the Minimum Linear Arrangement problem which is generally NP-hard, the optimal static tree network can be computed in polynomial time for any guest graph, despite the exponentially large graph family. We complement our formal analysis with a small simulation study on a Facebook graph.",
"Gossip, or epidemic, protocols have emerged as a highly scalable and resilient approach to implement several application level services such as reliable multicast, data aggregation, publish-subscribe, among others. All these protocols organize nodes in an unstructured random overlay network. In many cases, it is interesting to bias the random overlay in order to optimize some efficiency criteria, for instance, to reduce the stretch of the overlay routing. In this paper, we propose X-BOT, a new protocol that allows to bias the topology of an unstructured gossip overlay network. X-BOT is completely decentralized and, unlike previous approaches, preserves several key properties of the original (nonbiased) overlay (most notably, the node degree and consequently, the overlay connectivity). Experimental results show that X-BOT can generate more efficient overlays than previous approaches independently of the underlying physical network topology."
]
}
|
1309.3319
|
2952149250
|
The design of scalable and robust overlay topologies has been a main research subject since the very origins of peer-to-peer (p2p) computing. Today, the corresponding optimization tradeoffs are fairly well-understood, at least in the static case and from a worst-case perspective. This paper revisits the peer-to-peer topology design problem from a self-organization perspective. We initiate the study of topologies which are optimized to serve the communication demand, or even self-adjusting as demand changes. The appeal of this new paradigm lies in the opportunity to be able to go beyond the lower bounds and limitations imposed by a static, communication-oblivious, topology. For example, the goal of having short routing paths (in terms of hop count) does no longer conflict with the requirement of having low peer degrees. We propose a simple overlay topology Obst(k) which is composed of k (rooted and directed) Binary Search Trees (BSTs), where k is a parameter. We first prove some fundamental bounds on what can and cannot be achieved optimizing a topology towards a static communication pattern (a static Obst(k)). In particular, we show that the number of BSTs that constitute the overlay can have a large impact on the routing costs, and that a single additional BST may reduce the amortized communication costs from Omega(log(n)) to O(1), where n is the number of peers. Subsequently, we discuss a natural self-adjusting extension of Obst(k), in which frequently communicating partners are "splayed together".
|
More generally, one may also regard geography @cite_21 or latency-aware @cite_7 p2p systems as providing a certain degree of self-adaptiveness. However, these systems are typically optimized towards more static criteria, and change less frequently. This also holds for the p2p topologies tailored towards the ISPs' infrastructures @cite_19 .
|
{
"cite_N": [
"@cite_19",
"@cite_21",
"@cite_7"
],
"mid": [
"",
"2043481910",
"2167898414"
],
"abstract": [
"",
"Existing peer-to-peer overlay approaches for location-based search have proven to be a valid alternative to client-server-based schemes. One of the key issues of the peer-to-peer approach is the high churn rate caused by joining and leaving peers. To address this problem, this paper proposes a new location-aware peer-to-peer overlay termed Geodemlia to achieve a robust and efficient location-based search. To evaluate Geodemlia, a real world workload model for peer-to-peer location-based services is derived from traces of Twitter. Using the workload model, a system parameter analysis of Geodemlia is conducted with the goal of finding a suitable parameter configuration. In addition, the scalability and robustness of Geodemlia is compared to a state-of-the-art tree-based approach by investigating the performance and costs of both overlays under an increasing number of peers, an increasing radius of area searches, an increasing level of churn as well as for different peer placement and search request schemes. The evaluation results reveal that in contrast to the tree-based approach, Geodemlia provides on average a 46 better success ratio as well as a 18 better recall at a moderate higher traffic overhead of 13 bytes s and an increased average response time of 0.2 s.",
"This paper presents the design and evaluation of Pastry, a scalable, distributed object location and routing substrate for wide-area peer-to-peer ap- plications. Pastry performs application-level routing and object location in a po- tentially very large overlay network of nodes connected via the Internet. It can be used to support a variety of peer-to-peer applications, including global data storage, data sharing, group communication and naming. Each node in the Pastry network has a unique identifier (nodeId). When presented with a message and a key, a Pastry node efficiently routes the message to the node with a nodeId that is numerically closest to the key, among all currently live Pastry nodes. Each Pastry node keeps track of its immediate neighbors in the nodeId space, and notifies applications of new node arrivals, node failures and recoveries. Pastry takes into account network locality; it seeks to minimize the distance messages travel, according to a to scalar proximity metric like the number of IP routing hops. Pastry is completely decentralized, scalable, and self-organizing; it automatically adapts to the arrival, departure and failure of nodes. Experimental results obtained with a prototype implementation on an emulated network of up to 100,000 nodes confirm Pastry's scalability and efficiency, its ability to self-organize and adapt to node failures, and its good network locality properties."
]
}
|
1309.3284
|
2953237304
|
In wireless sensor networks, a few sensor nodes end up being vulnerable to potentially rapid depletion of the battery reserves due to either their central location or just the traffic patterns generated by the application. Traditional energy management strategies, such as those which use topology control algorithms, reduce the energy consumed at each node to the minimum necessary. In this paper, we use a different approach that balances the energy consumption at each of the nodes, thus increasing the functional lifetime of the network. We propose a new distributed dynamic topology control algorithm called Energy Balanced Topology Control (EBTC) which considers the actual energy consumed for each transmission and reception to achieve the goal of an increased functional lifetime. We analyze the algorithm's computational and communication complexity and show that it is equivalent or lower in complexity to other dynamic topology control algorithms. Using an empirical model of energy consumption, we show that the EBTC algorithm increases the lifetime of a wireless sensor network by over 40 compared to the best of previously known algorithms.
|
Traditional topology control algorithms use an energy management strategy in which each sensor node reduces its transmission power from its maximum value to a relatively small quantity while maintaining the connectivity of the network. Algorithms of this class include Directed Relative Neighborhood Graph (DRNG) @cite_14 , Directed Local Spanning Subgraph (DLSS) @cite_14 , @cite_11 and @cite_2 . The topology of the network is determined at the very beginning of the network's life, and remains the same throughout the network lifetime.
|
{
"cite_N": [
"@cite_14",
"@cite_2",
"@cite_11"
],
"mid": [
"2100977699",
"2116956900",
"2003012549"
],
"abstract": [
"Most existing topology control algorithms assume homogeneous wireless networks with uniform maximal transmission power, and cannot be directly applied to heterogeneous wireless networks where the maximal transmission power of each node may be different. We present two localized topology control algorithms for heterogeneous networks: Directed Relative Neighborhood Graph (DRNG) and Directed Local Spanning Subgraph (DLSS). In both algorithms, each node independently builds its neighbor set by adjusting the transmission power, and defines the network topology by using only local information. We prove that: 1) both DRNG and DLSS can preserve network connectivity; 2) the out-degree of any node in the resulting topology generated by DRNG or DLSS is bounded by a constant; and 3) DRNG and DLSS can preserve network bi-directionality. Simulation results indicate that DRNG and DLSS significantly outperform existing topology control algorithms for heterogeneous networks in several aspects.",
"The topology of a wireless multi-hop network can be controlled by varying the transmission power at each node. In this paper, we give a detailed analysis of a cone-based distributed topology-control (CBTC) algorithm. This algorithm does not assume that nodes have GPS information available; rather it depends only on directional information. Roughly speaking, the basic idea of the algorithm is that a node u transmits with the minimum power p sub u, spl alpha required to ensure that in every cone of degree spl alpha around u, there is some node that u can reach with power p sub u, spl alpha . We show that taking spl alpha =5 spl pi 6 is a necessary and sufficient condition to guarantee that network connectivity is preserved. More precisely, if there is a path from s to t when every node communicates at maximum power then, if spl alpha spl les 5 spl pi 6, there is still a path in the smallest symmetric graph G sub spl alpha containing all edges (u,v) such that u can communicate with v using power p sub u, spl alpha . On the other hand, if spl alpha >5 spl pi 6, connectivity is not necessarily preserved. We also propose a set of optimizations that further reduce power consumption and prove that they retain network connectivity. Dynamic reconfiguration in the presence of failures and mobility is also discussed. Simulation results are presented to demonstrate the effectiveness of the algorithm and the optimizations.",
"Each node in a wireless multi-hop network can adjust the power level at which it transmits and thus change the topology of the network to save energy by choosing the neighbors with which it directly communicates. Many previous algorithms for distributed topology control have assumed an ability at each node to deduce some location-based information such as the direction and the distance of its neighbor nodes with respect to itself. Such a deduction of location-based information, however, cannot be relied upon in real environments where the path loss exponents vary greatly leading to significant errors in distance estimates. Also, multipath effects may result in different signal paths with different loss characteristics, and none of these paths may be line-of-sight, making it difficult to estimate the direction of a neighboring node. In this paper, we present Step Topology Control (STC), a simple distributed topology control algorithm which reduces energy consumption while preserving the connectivity of a heterogeneous sensor network without use of any location-based information. The STC algorithm avoids the use of GPS devices and also makes no assumptions about the distance and direction between neighboring nodes. We show that the STC algorithm achieves the same or better order of communication and computational complexity when compared to other known algorithms that also preserve connectivity without the use of location-based information. We also present a detailed simulation-based comparative analysis of the energy savings and interference reduction achieved by the algorithms. The results show that, in spite of not incurring a higher communication or computational complexity, the STC algorithm performs better than other algorithms in uniform wireless environments and especially better when path loss characteristics are non-uniform."
]
}
|
1309.3284
|
2953237304
|
In wireless sensor networks, a few sensor nodes end up being vulnerable to potentially rapid depletion of the battery reserves due to either their central location or just the traffic patterns generated by the application. Traditional energy management strategies, such as those which use topology control algorithms, reduce the energy consumed at each node to the minimum necessary. In this paper, we use a different approach that balances the energy consumption at each of the nodes, thus increasing the functional lifetime of the network. We propose a new distributed dynamic topology control algorithm called Energy Balanced Topology Control (EBTC) which considers the actual energy consumed for each transmission and reception to achieve the goal of an increased functional lifetime. We analyze the algorithm's computational and communication complexity and show that it is equivalent or lower in complexity to other dynamic topology control algorithms. Using an empirical model of energy consumption, we show that the EBTC algorithm increases the lifetime of a wireless sensor network by over 40 compared to the best of previously known algorithms.
|
But the WDTC algorithm does not allow for the fact that, for any sensor node, the energy consumed for sending and receiving a packet may be different. The energy consumed in receiving a packet is usually less than that of the energy consumed for sending the same packet @cite_7 @cite_4 @cite_15 @cite_9 . In other words, if the sender and the receiver node have the same amount of energy, the sender node will very likely to be the node that runs out of energy first. Therefore, the current energy level alone does not tell us enough to estimate the remaining lifetime of a wireless sensor node.
|
{
"cite_N": [
"@cite_9",
"@cite_15",
"@cite_4",
"@cite_7"
],
"mid": [
"1980122367",
"",
"2106665154",
"2106335692"
],
"abstract": [
"Game theory has been used for decades in fields of science such as economics and biology, but recently it was used to model routing and packet forwarding in wireless ad-hoc and sensor networks. However, the clustering problem, related to self-organization of nodes into large groups, has not been studied under this framework. In this work our objective is to provide a game theoretical modeling of clustering for ad-hoc and sensor networks. The analysis is based on a non-cooperative game approach where each sensor behaves selfishly in order to conserve its energy and thus maximize its lifespan. We prove the Nash Equilibria of the game for pure and mixed strategies, the expected payoffs and the price of anarchy corresponding to these equilibria. Then, we use this analysis to formulate a clustering mechanism (which we called Clustered Routing for Selfish Sensors--CROSS), that can be applied to sensor networks in practice. Comparing this mechanism to a popular clustering technique, we show via simulations that CROSS achieves a performance similar to that of a very popular clustering algorithm.",
"",
"In wireless sensor networks that consist of a large number of low-power, short-lived, unreliable sensors, one of the main design challenges is to obtain long system lifetime without sacrificing system original performances (sensing coverage and sensing reliability). In this paper, we propose a node-scheduling scheme, which can reduce system overall energy consumption, therefore increasing system lifetime, by identifying redundant nodes in respect of sensing coverage and then assigning them an off-duty operation mode that has lower energy consumption than the normal on-duty one. Our scheme aims to completely preserve original sensing coverage theoretically. Practically, sensing coverage degradation caused by location error, packet loss and node failure is very limited, not more than 1 as shown by our experimental results. In addition, the experimental results illustrate that certain redundancy is still guaranteed after node-scheduling, which we believe can provide enough sensing reliability in many applications. We implement the proposed scheme in NS-2 as an extension of the LEACH protocol and compare its energy consumption with the original LEACH. Simulation results exhibit noticeably longer system lifetime after introducing our scheme than before. Copyright © 2003 John Wiley & Sons, Ltd.",
"Wireless distributed microsensor systems will enable the reliable monitoring of a variety of environments for both civil and military applications. In this paper, we look at communication protocols, which can have significant impact on the overall energy dissipation of these networks. Based on our findings that the conventional protocols of direct transmission, minimum-transmission-energy, multi-hop routing, and static clustering may not be optimal for sensor networks, we propose LEACH (Low-Energy Adaptive Clustering Hierarchy), a clustering-based protocol that utilizes randomized rotation of local cluster based station (cluster-heads) to evenly distribute the energy load among the sensors in the network. LEACH uses localized coordination to enable scalability and robustness for dynamic networks, and incorporates data fusion into the routing protocol to reduce the amount of information that must be transmitted to the base station. Simulations show the LEACH can achieve as much as a factor of 8 reduction in energy dissipation compared with conventional outing protocols. In addition, LEACH is able to distribute energy dissipation evenly throughout the sensors, doubling the useful system lifetime for the networks we simulated."
]
}
|
1309.3321
|
2951668949
|
Graphs are used to model interactions in a variety of contexts, and there is a growing need to quickly assess the structure of such graphs. Some of the most useful graph metrics are based on triangles, such as those measuring social cohesion. Algorithms to compute them can be extremely expensive, even for moderately-sized graphs with only millions of edges. Previous work has considered node and edge sampling; in contrast, we consider wedge sampling, which provides faster and more accurate approximations than competing techniques. Additionally, wedge sampling enables estimation local clustering coefficients, degree-wise clustering coefficients, uniform triangle sampling, and directed triangle counts. Our methods come with provable and practical probabilistic error estimates for all computations. We provide extensive results that show our methods are both more accurate and faster than state-of-the-art alternatives.
|
There has been significant work on enumeration of all triangles @cite_9 @cite_8 @cite_21 @cite_23 @cite_31 . Recent work by Cohen @cite_28 and Suri and Vassilvitskii @cite_38 give MapReduce implementations of these algorithms. @cite_22 give a massively parallel algorithm for computing clustering coefficients. Enumeration algorithms however, can be very expensive, since graphs even of moderate size (millions of vertices) can have an extremely large number of triangles (see, e.g., prop ). Eigenvalue trace based methods have been used by Tsourakakis @cite_24 and Avron @cite_26 to compute estimates of the total and per-degree number of triangles. However, computing eigenvalues (even just a few of them) is a compute-intensive task and quickly becomes intractable on large graphs. In our experiment, even computing the largest eigenvalue was multiple orders of magnitude slower than full enumeration.
|
{
"cite_N": [
"@cite_38",
"@cite_26",
"@cite_22",
"@cite_8",
"@cite_28",
"@cite_9",
"@cite_21",
"@cite_24",
"@cite_23",
"@cite_31"
],
"mid": [
"2101157403",
"",
"2002779084",
"",
"2019724001",
"2055245094",
"2016311778",
"2120595041",
"",
"2012720017"
],
"abstract": [
"In this paper, we examine online friendships at a network level. We focus on three structural signatures: network size, balance (triangles), and age homophily in the friendship ego-networks of 30 users of the virtual world Second Life. In relation to previous findings from studies of offline friendship networks, our results reveal that online networks are similar in age-homophily, but significantly different in size and balance.",
"",
"This article outlines the mechanism by which brokerage provides social capital. Opinion and behavior are more homogeneous within than between groups, so people connected across groups are more familiar with alternative ways of thinking and behaving. Brokerage across the structural holes between groups provides a vision of options otherwise unseen, which is the mechanism by which brokerage becomes social capital. I review evidence consistent with the hypothesis, then look at the networks around managers in a large American electronics company. The organization is rife with structural holes, and brokerage has its expected correlates. Compensation, positive performance evaluations, promotions, and good ideas are disproportionately in the hands of people whose networks span structural holes. The between-group brokers are more likely to express ideas, less likely to have ideas dismissed, and more likely to have ideas evaluated as valuable. I close with implications for creativity and structural change.",
"",
"As the size of graphs for analysis continues to grow, methods of graph processing that scale well have become increasingly important. One way to handle large datasets is to disperse them across an array of networked computers, each of which implements simple sorting and accumulating, or MapReduce, operations. This cloud computing approach offers many attractive features. If decomposing useful graph operations in terms of MapReduce cycles is possible, it provides incentive for seriously considering cloud computing. Moreover, it offers a way to handle a large graph on a single machine that can't hold the entire graph as well as enables streaming graph processing. This article examines this possibility.",
"In this paper we introduce a new simple strategy into edge-searching of a graph, which is useful to the various subgraph listing problems. Applying the strategy, we obtain the following four algorithms. The first one lists all the triangles in a graph G in @math time, where m is the number of edges of G and @math the arboricity of G. The second finds all the quadrangles in @math time. Since @math is at most three for a planar graph G, both run in linear time for a planar graph. The third lists all the complete subgraphs @math of order l in @math time. The fourth lists all the cliques in @math time per clique. All the algorithms require linear space. We also establish an upper bound on @math for a graph @math , where n is the number of vertices in G.",
"Finding, counting and or listing triangles (three vertices with three edges) in massive graphs are natural fundamental problems, which have recently received much attention because of their importance in complex network analysis. Here we provide a detailed survey of proposed main-memory solutions to these problems, in a unified way. We note that previous authors have paid surprisingly little attention to space complexity of main-memory solutions, despite its both fundamental and practical interest. We therefore detail space complexities of known algorithms and discuss their implications. We also present new algorithms which are time optimal for triangle listing and beats previous algorithms concerning space needs. They have the additional advantage of performing better on power-law graphs, which we also detail. We finally show with an experimental study that these two algorithms perform very well in practice, allowing us to handle cases which were previously out of reach.",
"How can we quickly find the number of triangles in a large graph, without actually counting them? Triangles are important for real world social networks, lying at the heart of the clustering coefficient and of the transitivity ratio. However, straight-forward and even approximate counting algorithms can be slow, trying to execute or approximate the equivalent of a 3-way database join. In this paper, we provide two algorithms, the eigentriangle for counting the total number of triangles in a graph, and the eigentrianglelocal algorithm that gives the count of triangles that contain a desired node. Additional contributions include the following: (a) We show that both algorithms achieve excellent accuracy, with up to sime 1000x faster execution time, on several, real graphs and (b) we discover two new power laws (degree-triangle and triangleparticipation laws) with surprising properties.",
"",
"Triangle listing is one of the fundamental algorithmic problems whose solution has numerous applications especially in the analysis of complex networks, such as the computation of clustering coefficient, transitivity, triangular connectivity, etc. Existing algorithms for triangle listing are mainly in-memory algorithms, whose performance cannot scale with the massive volume of today's fast growing networks. When the input graph cannot fit into main memory, triangle listing requires random disk accesses that can incur prohibitively large I O cost. Some streaming and sampling algorithms have been proposed but these are approximation algorithms. We propose an I O-efficient algorithm for triangle listing. Our algorithm is exact and avoids random disk access. Our results show that our algorithm is scalable and outperforms the state-of-the-art local triangle estimation algorithm."
]
}
|
1309.3321
|
2951668949
|
Graphs are used to model interactions in a variety of contexts, and there is a growing need to quickly assess the structure of such graphs. Some of the most useful graph metrics are based on triangles, such as those measuring social cohesion. Algorithms to compute them can be extremely expensive, even for moderately-sized graphs with only millions of edges. Previous work has considered node and edge sampling; in contrast, we consider wedge sampling, which provides faster and more accurate approximations than competing techniques. Additionally, wedge sampling enables estimation local clustering coefficients, degree-wise clustering coefficients, uniform triangle sampling, and directed triangle counts. Our methods come with provable and practical probabilistic error estimates for all computations. We provide extensive results that show our methods are both more accurate and faster than state-of-the-art alternatives.
|
Most relevant to our work are sampling mechanisms. @cite_11 started the use of sparsification methods, the most important of which is Doulion @cite_5 . This method sparsifies the graph by keeping each edge with probability @math ; counts the triangles in the sparsified graph; and multiplies this count by @math to predict the number of triangles in the original graph. Various theoretical analyses of this algorithm (and its variants) have been proposed @cite_3 @cite_7 @cite_14 . One of the main benefits of Doulion is that it reduces large graphs to smaller ones that can be loaded into memory. However, the Doulion estimate can suffer from high variance @cite_4 . Alternative sampling mechanisms have been proposed for streaming and semi-streaming algorithms @cite_27 @cite_35 @cite_10 @cite_34 . Yet, all these fast sampling methods only estimate the number of triangles and give no information about other triadic measures.
|
{
"cite_N": [
"@cite_35",
"@cite_14",
"@cite_4",
"@cite_7",
"@cite_3",
"@cite_27",
"@cite_5",
"@cite_34",
"@cite_10",
"@cite_11"
],
"mid": [
"",
"2002205566",
"89448491",
"",
"1542957328",
"2002576896",
"2158432527",
"",
"",
""
],
"abstract": [
"",
"In this note we introduce a new randomized algorithm for counting triangles in graphs. We show that under mild conditions, the estimate of our algorithm is strongly concentrated around the true number of triangles. Specifically, let G be a graph with n vertices, t triangles and let @D be the maximum number of triangles an edge of G is contained in. Our randomized algorithm colors the vertices of G with N=1 p colors uniformly at random, counts monochromatic triangles, i.e., triangles whose vertices have the same color, and scales that count appropriately. We show that if p>=max(@Dlognt,lognt) then for any constant @e>0 our unbiased estimate T is concentrated around its expectation, i.e., Pr[|T-E[T]|>=@eE[T]]=o(1). Finally, our algorithm is amenable to being parallelized. We present a simple MapReduce implementation of our algorithm.",
"The problem of counting the number of triangles in a graph has gained importance in the last few years due to its importance in many data mining applications. Recently, Tsourkakis, et al proposed DOULION, which is based on a simple sampling idea but works very well on many of the important graphs. In this preliminary report, we show that DOULION may not be very correct on special cases of graphs and argue that it may not fulfill the main purpose of the triangle counting problem for real-world graphs. We then present improvements on DOULION and show that it works better, much better in some cases, than DOULION.",
"",
"In this paper we present an efficient triangle counting algorithm which can be adapted to the semistreaming model [12]. The key idea of our algorithm is to combine the sampling algorithm of [31,32] and the partitioning of the set of vertices into a high degree and a low degree subset respectively as in [1], treating each set appropriately. We obtain a running time (O ( m + m^ 3 2 n t ^2 ) ) and an e approximation (multiplicative error), where n is the number of vertices, m the number of edges and Δ the maximum number of triangles an edge is contained. Furthermore, we show how this algorithm can be adapted to the semistreaming model with space usage (O (m^ 1 2 n + m^ 3 2 n t ^2 ) ) and a constant number of passes (three) over the graph stream. We apply our methods in various networks with several millions of edges and we obtain excellent results. Finally, we propose a random projection based method for triangle counting and provide a sufficient condition to obtain an estimate with low variance.",
"We introduce reductions in the streaming model as a tool in the design of streaming algorithms. We develop the concept of list-efficient streaming algorithms that are essential to the design of efficient streaming algorithms through reductions.Our results include a suite of list-efficient streaming algorithms for basic statistical primitives. Using the reduction paradigm along with these tools, we design streaming algorithms for approximately counting the number of triangles in a graph presented as a stream.A specific highlight of our work is the first algorithm for the number of distinct elements in a data stream that achieves arbitrary approximation factors. (Independently, Trevisan [Tre01] has solved this problem via a different approach; our algorithm has the advantage of being list-efficient.)",
"Counting the number of triangles in a graph is a beautiful algorithmic problem which has gained importance over the last years due to its significant role in complex network analysis. Metrics frequently computed such as the clustering coefficient and the transitivity ratio involve the execution of a triangle counting algorithm. Furthermore, several interesting graph mining applications rely on computing the number of triangles in the graph of interest. In this paper, we focus on the problem of counting triangles in a graph. We propose a practical method, out of which all triangle counting algorithms can potentially benefit. Using a straightforward triangle counting algorithm as a black box, we performed 166 experiments on real-world networks and on synthetic datasets as well, where we show that our method works with high accuracy, typically more than 99 and gives significant speedups, resulting in even ≈ 130 times faster performance.",
"",
"",
""
]
}
|
1309.3321
|
2951668949
|
Graphs are used to model interactions in a variety of contexts, and there is a growing need to quickly assess the structure of such graphs. Some of the most useful graph metrics are based on triangles, such as those measuring social cohesion. Algorithms to compute them can be extremely expensive, even for moderately-sized graphs with only millions of edges. Previous work has considered node and edge sampling; in contrast, we consider wedge sampling, which provides faster and more accurate approximations than competing techniques. Additionally, wedge sampling enables estimation local clustering coefficients, degree-wise clustering coefficients, uniform triangle sampling, and directed triangle counts. Our methods come with provable and practical probabilistic error estimates for all computations. We provide extensive results that show our methods are both more accurate and faster than state-of-the-art alternatives.
|
In subsequent work by the authors of this paper, a Hadoop implementation of these techniques is given in @cite_33 , and a streaming version of the wedge sampling method is presented in @cite_2 .
|
{
"cite_N": [
"@cite_33",
"@cite_2"
],
"mid": [
"1968414620",
"2031082424"
],
"abstract": [
"Graphs and networks are used to model interactions in a variety of contexts. There is a growing need to quickly assess the characteristics of a graph in order to understand its underlying structure. Some of the most useful metrics are triangle-based and give a measure of the connectedness of mutual friends. This is often summarized in terms of clustering coefficients, which measure the likelihood that two neighbors of a node are themselves connected. Computing these measures exactly for large-scale networks is prohibitively expensive in both memory and time. However, a recent wedge-sampling algorithm has proved successful in efficiently and accurately estimating clustering coefficients. In this paper, we describe how to implement this approach in MapReduce to deal with massive graphs. We show results on publicly available networks, the largest of which is 132M nodes and 4.7B edges, as well as artificially generated networks (using the Graph500 benchmark), the largest of which has 240M nodes and 8.5B edges...",
"We design a space efficient algorithm that approximates the transitivity (global clustering coefficient) and total triangle count with only a single pass through a graph given as a stream of edges. Our procedure is based on the classic probabilistic result, the birthday paradox. When the transitivity is constant and there are more edges than wedges (common properties for social networks), we can prove that our algorithm requires O(√n) space (n is the number of vertices) to provide accurate estimates. We run a detailed set of experiments on a variety of real graphs and demonstrate that the memory requirement of the algorithm is a tiny fraction of the graph. For example, even for a graph with 200 million edges, our algorithm stores just 60,000 edges to give accurate results. Being a single pass streaming algorithm, our procedure also maintains a real-time estimate of the transitivity number of triangles of a graph, by storing a miniscule fraction of edges."
]
}
|
1309.2328
|
2952687074
|
The Partitioned Global Address Space (PGAS) programming model strikes a balance between the locality-aware, but explicit, message-passing model and the easy-to-use, but locality-agnostic, shared memory model. However, the PGAS rich memory model comes at a performance cost which can hinder its potential for scalability and performance. To contain this overhead and achieve full performance, compiler optimizations may not be sufficient and manual optimizations are typically added. This, however, can severely limit the productivity advantage. Such optimizations are usually targeted at reducing address translation overheads for shared data structures. This paper proposes a hardware architectural support for PGAS, which allows the processor to efficiently handle shared addresses. This eliminates the need for such hand-tuning, while maintaining the performance and productivity of PGAS languages. We propose to avail this hardware support to compilers by introducing new instructions to efficiently access and traverse the PGAS memory space. A prototype compiler is realized by extending the Berkeley Unified Parallel C (UPC) compiler. It allows unmodified code to use the new instructions without the user intervention, thereby creating a real productive programming environment. Two implementations are realized: the first is implemented using the full system simulator Gem5, which allows the evaluation of the performance gain. The second is implemented using a softcore processor Leon3 on an FPGA to verify the implementability and to parameterize the cost of the new hardware and its instructions. The new instructions show promising results for the NAS Parallel Benchmarks implemented in UPC. A speedup of up to 5.5x is demonstrated for unmodified and unoptimized codes. Unoptimized code performance using this hardware was shown to also surpass the performance of manually optimized code by up to 10 .
|
Along with the productivity studies, many efforts were performed in the direction of evaluating the potential of achieving performance using UPC. In @cite_2 , the authors demonstrated that hand-tuned UPC code can achieve comparable performance to, and sometimes even better than, code in MPI. @cite_11 evaluates the performance of different UPC compilers on 3 different machines: a Linux x86 cluster, an AlphaServer SC and a Cray T3E.
|
{
"cite_N": [
"@cite_11",
"@cite_2"
],
"mid": [
"2148207191",
"2119148081"
],
"abstract": [
"UPC is a parallel programming language based on the concept of partitioned shared memory. There are now several UPC compilers available and several different parallel architectures that support one or more of these compilers. This paper is the first to compare the performance of most of the currently available UPC implementations on several commonly used parallel platforms. These compilers are the GASNet UPC compiler from UC Berkeley, the v1.1 MuPC compiler from Michigan Tech, the Hewlet-Packard v2.2 compiler, and the Intrepid UPC compiler. The parallel architectures used in this study are a 16-node x86 Myrinet cluster, a 31-processor AlphaServer SC-40, and a 48-processor Cray T3E. A STREAM-like microbenchmark was developed to measure fine- and course-grained shared-memory accesses. Also measured are five NPB kernels using existing UPC implementations. These measurements and associated observations provide a snapshot of the relative performance of current UPC platforms.",
"Partitioned Global Address Space (PGAS) languages combine the programming convenience of shared memory with the locality and performance control of message passing. One such language, Unified Parallel C (UPC) is an extension of ISO C defined by a consortium that boasts multiple proprietary and open source compilers. Another PGAS language, Titanium, is a dialect of JavaTM designed for high performance scientific computation. In this paper we describe some of the highlights of two related projects, the Titanium project centered at U.C. Berkeley and the UPC project centered at Lawrence Berkeley National Laboratory. Both compilers use a source-to-source strategy that trans-lates the parallel languages to C with calls to a communication layer called GASNet. The result is portable high-performance compilers that run on a large variety of shared and distributed memory multiprocessors. Both projects combine compiler, runtime, and application efforts to demonstrate some of the performance and productivity advantages to these languages."
]
}
|
1309.2328
|
2952687074
|
The Partitioned Global Address Space (PGAS) programming model strikes a balance between the locality-aware, but explicit, message-passing model and the easy-to-use, but locality-agnostic, shared memory model. However, the PGAS rich memory model comes at a performance cost which can hinder its potential for scalability and performance. To contain this overhead and achieve full performance, compiler optimizations may not be sufficient and manual optimizations are typically added. This, however, can severely limit the productivity advantage. Such optimizations are usually targeted at reducing address translation overheads for shared data structures. This paper proposes a hardware architectural support for PGAS, which allows the processor to efficiently handle shared addresses. This eliminates the need for such hand-tuning, while maintaining the performance and productivity of PGAS languages. We propose to avail this hardware support to compilers by introducing new instructions to efficiently access and traverse the PGAS memory space. A prototype compiler is realized by extending the Berkeley Unified Parallel C (UPC) compiler. It allows unmodified code to use the new instructions without the user intervention, thereby creating a real productive programming environment. Two implementations are realized: the first is implemented using the full system simulator Gem5, which allows the evaluation of the performance gain. The second is implemented using a softcore processor Leon3 on an FPGA to verify the implementability and to parameterize the cost of the new hardware and its instructions. The new instructions show promising results for the NAS Parallel Benchmarks implemented in UPC. A speedup of up to 5.5x is demonstrated for unmodified and unoptimized codes. Unoptimized code performance using this hardware was shown to also surpass the performance of manually optimized code by up to 10 .
|
In @cite_15 , El- clearly demonstrated the overhead of the PGAS shared memory model. They proposed a framework to assess the compilers and the runtime systems capabilities to optimize such overheads. In order to solve those issues, different compiler optimizations have been researched including optimization techniques such as lookup tables: @cite_4 @cite_17 ; the reduced overhead is still significant and the methods can use a good amount of memory. Also, alternative representations for shared pointers have been implemented, for example phaseless pointers are used for shared addresses with a block size of 1 or infinity @cite_12 , this is only applicable to a few cases and still present a significant overheads.
|
{
"cite_N": [
"@cite_15",
"@cite_4",
"@cite_12",
"@cite_17"
],
"mid": [
"2054016504",
"2169080325",
"2135342109",
""
],
"abstract": [
"Unified Parallel C (UPC) is an explicit parallel extension to ISO C which follows the Partitioned Global Address Space (PGAS) programming model. UPC, therefore, combines the ability to express parallelism while exploiting locality. To do so, compilers must embody effective UPC-specific optimizations. In this paper we present a strategy for evaluating the performance of PGAS compilers. It is based on emulating possible optimizations and comparing the performance to the raw compiler performance. It will be shown that this technique uncovers missed optimization opportunities. The results also demonstrate that, with such automatic optimizations, the UPC performance will be compared favorably with other paradigms.",
"The distributed shared memory (DSM) model is designed to leverage the ease of programming of the shared memory paradigm, while enabling the high-performance by expressing locality as in the message-passing model. Experience, however, has shown that DSM programming languages, such as UPC, may be unable to deliver the expected high level of performance. Initial investigations have shown that among the major reasons is the overhead of translating from the UPC memory model to the target architecture virtual addresses space, which can be very costly. Experimental measurements have shown this overhead increasing execution time by up to three orders of magnitude. Previous work has also shown that some of this overhead can be avoided by hand-tuning, which on the other hand can significantly decrease the UPC ease of use. In addition, such tuning can only improve the performance of local shared accesses but not remote shared accesses. Therefore, a new technique that resembles the translation look aside buffers (TLBs) is proposed here. This technique, which is called the memory model translation buffer (MMTB) has been implemented in the GCC-UPC compiler using two alternative strategies, full-table (FT) and reduced-table (RT). It would be shown that the MMTB strategies can lead to a performance boost of up to 700 , enabling ease-of-programming while performing at a similar performance to hand-tuned UPC and MPI codes.",
"Unified Parallel C (UPC) is a parallel language that uses a Single Program Multiple Data (SPMD) model of parallelism within a global address space. The global address space is used to simplify programming, especially on applications with irregular data structures that lead to fine-grained sharing between threads. Recent results have shown that the performance of UPC using a commercial compiler is comparable to that of MPI [7]. In this paper we describe a portable open source compiler for UPC. Our goal is to achieve a similar performance while enabling easy porting of the compiler and runtime, and also provide a framework that allows for extensive optimizations. We identify some of the challenges in compiling UPC and use a combination of micro-benchmarks and application kernels to show that our compiler has low overhead for basic operations on shared data and is competitive, and sometimes faster than, the commercial HP compiler. We also investigate several communication optimizations, and show significant benefits by hand-optimizing the generated code.",
""
]
}
|
1309.2328
|
2952687074
|
The Partitioned Global Address Space (PGAS) programming model strikes a balance between the locality-aware, but explicit, message-passing model and the easy-to-use, but locality-agnostic, shared memory model. However, the PGAS rich memory model comes at a performance cost which can hinder its potential for scalability and performance. To contain this overhead and achieve full performance, compiler optimizations may not be sufficient and manual optimizations are typically added. This, however, can severely limit the productivity advantage. Such optimizations are usually targeted at reducing address translation overheads for shared data structures. This paper proposes a hardware architectural support for PGAS, which allows the processor to efficiently handle shared addresses. This eliminates the need for such hand-tuning, while maintaining the performance and productivity of PGAS languages. We propose to avail this hardware support to compilers by introducing new instructions to efficiently access and traverse the PGAS memory space. A prototype compiler is realized by extending the Berkeley Unified Parallel C (UPC) compiler. It allows unmodified code to use the new instructions without the user intervention, thereby creating a real productive programming environment. Two implementations are realized: the first is implemented using the full system simulator Gem5, which allows the evaluation of the performance gain. The second is implemented using a softcore processor Leon3 on an FPGA to verify the implementability and to parameterize the cost of the new hardware and its instructions. The new instructions show promising results for the NAS Parallel Benchmarks implemented in UPC. A speedup of up to 5.5x is demonstrated for unmodified and unoptimized codes. Unoptimized code performance using this hardware was shown to also surpass the performance of manually optimized code by up to 10 .
|
Multiple systems have implemented Hardware support for shared memory across a system. For example, the T3D supercomputer used a 'Support Circuitry' chip located between the processor and the local memory @cite_1 ; this chip, on top of providing functionality like message passing and synchronization, allowed the processor to access any memory location across the machine. In @cite_18 is proposed a network engine especially designed for PGAS languages: it allows network communication between nodes by mapping other nodes memory space accross the network and providing a relaxed memory consistency model best suited for PGAS. Results were only presented in terms of read write throughput and transaction rates as no PGAS applications or benchmarks were tested. This approach is complementary to our work, as it focused on the network interface for PGAS languages and this paper focuses on the shared space addressing. Combining both an efficient addressing a an efficient network interface would provide a very efficient support for PGAS; this is noted as future work.
|
{
"cite_N": [
"@cite_18",
"@cite_1"
],
"mid": [
"2092070931",
"1979286721"
],
"abstract": [
"We present a novel architecture of a communication engine for non-coherent distributed shared memory systems. The shared memory is composed by a set of nodes exporting their memory. Remote memory access is possible by forwarding local load or store transactions to remote nodes. No software layers are involved in a remote access, neither on origin or target side: a user level process can directly access remote locations without any kind of software involvement. We have implemented the architecture as an FPGA-based prototype in order to demonstrate the functionality of the complete system. This prototype also allows real world measurements in order to show the performance potential of this architecture, in particular for fine grain memory accesses like they are typically used for synchronization tasks.",
"Most recent MPP systems employ a fast microprocessor surrounded by a shell of communication and synchronization logic. The CRAY-T3D provides an elaborate shell to support global-memory access, prefetch, atomic operations, barriers, and block transfers. We provide a detailed empirical performance characterization of these primitives using micro-benchmarks and evaluate their utility in compiling for a parallel language. We have found that the raw performance of the machine is quite impressive and the most effective forms of communication are prefetch and write. Other shell provisions, such as the bulk transfer engine and the external Annex register set, are cumbersome and of little use. By evaluating the system in the context of a language implementation, we shed light on important trade-offs and pitfalls in the machine architecture."
]
}
|
1309.2676
|
2081300664
|
Compressive sampling (CoSa) has provided many methods for signal recovery of signals compressible with respect to an orthonormal basis. However, modern applications have sparked the emergence of approaches for signals not sparse in an orthonormal basis but in some arbitrary, perhaps highly overcomplete, dictionary. Recently, several "signal-space" greedy methods have been proposed to address signal recovery in this setting. However, such methods inherently rely on the existence of fast and accurate projections which allow one to identify the most relevant atoms in a dictionary for any given signal, up to a very strict accuracy. When the dictionary is highly overcomplete, no such projections are currently known; the requirements on such projections do not even hold for incoherent or well-behaved dictionaries. In this work, we provide an alternate analysis for signal space greedy methods which enforce assumptions on these projections which hold in several settings including those when the dictionary is incoherent or structurally coherent. These results align more closely with traditional results in the standard CoSa literature and improve upon previous work in the signal space setting.
|
Our work extends the work of Davenport, Needell, and Wakin @cite_15 who develop and analyze the Signal Space CoSaMP algorithm. In that work, the @math -RIP is enforced, as well as access to projections which satisfy . It is currently unknown whether there exist efficient projections which satisfy these requirements, even for well-behaved dictionaries like those that satisfy the RIP or have an incoherence property. That being said, other results on signal space methods rely on such assumptions. For example, a related work by Blumensath analyzes an algorithm which is a signal space extension of the Iterative Hard Thresholding (IHT) method @cite_29 . The model in that work utilizes a union-of-subspaces model and also assumes the @math -RIP and projections with even stronger requirements than those in .
|
{
"cite_N": [
"@cite_15",
"@cite_29"
],
"mid": [
"2952337026",
"2100875869"
],
"abstract": [
"Compressive sensing (CS) has recently emerged as a powerful framework for acquiring sparse signals. The bulk of the CS literature has focused on the case where the acquired signal has a sparse or compressible representation in an orthonormal basis. In practice, however, there are many signals that cannot be sparsely represented or approximated using an orthonormal basis, but that do have sparse representations in a redundant dictionary. Standard results in CS can sometimes be extended to handle this case provided that the dictionary is sufficiently incoherent or well-conditioned, but these approaches fail to address the case of a truly redundant or overcomplete dictionary. In this paper we describe a variant of the iterative recovery algorithm CoSaMP for this more challenging setting. We utilize the D-RIP, a condition on the sensing matrix analogous to the well-known restricted isometry property. In contrast to prior work, the method and analysis are \"signal-focused\"; that is, they are oriented around recovering the signal rather than its dictionary coefficients. Under the assumption that we have a near-optimal scheme for projecting vectors in signal space onto the model family of candidate sparse signals, we provide provable recovery guarantees. Developing a practical algorithm that can provably compute the required near-optimal projections remains a significant open problem, but we include simulation results using various heuristics that empirically exhibit superior performance to traditional recovery algorithms.",
"In this paper, we study the problem of sampling and reconstructing signals which are assumed to lie on or close to one of several subspaces of a Hilbert space. Importantly, we here consider a very general setting in which we allow infinitely many subspaces in infinite dimensional Hilbert spaces. This general approach allows us to unify many results derived recently in areas such as compressed sensing, affine rank minimization, analog compressed sensing and structured matrix decompositions."
]
}
|
1309.2676
|
2081300664
|
Compressive sampling (CoSa) has provided many methods for signal recovery of signals compressible with respect to an orthonormal basis. However, modern applications have sparked the emergence of approaches for signals not sparse in an orthonormal basis but in some arbitrary, perhaps highly overcomplete, dictionary. Recently, several "signal-space" greedy methods have been proposed to address signal recovery in this setting. However, such methods inherently rely on the existence of fast and accurate projections which allow one to identify the most relevant atoms in a dictionary for any given signal, up to a very strict accuracy. When the dictionary is highly overcomplete, no such projections are currently known; the requirements on such projections do not even hold for incoherent or well-behaved dictionaries. In this work, we provide an alternate analysis for signal space greedy methods which enforce assumptions on these projections which hold in several settings including those when the dictionary is incoherent or structurally coherent. These results align more closely with traditional results in the standard CoSa literature and improve upon previous work in the signal space setting.
|
These types of projections also appear in compressive sensing, where such operators project onto a specified model set. The model may describe structured sparsity patterns like tree-like or block sparsity, or may be a more general mode. In this setting, signal recovery is performed by first reconstructing the coefficient vector, and then mapping to signal space. When the dictionary @math is an orthonormal basis, greedy methods have been adapted to structured sparsity models @cite_30 . The assumptions, however, nearly require the product @math to satisfy the traditional RIP, and so extensions to non-orthonormal dictionaries serve to be difficult. Although our work differs in its assumptions and domain model, model-based methods inspired the development of signal space CoSaMP @cite_15 @cite_2 .
|
{
"cite_N": [
"@cite_30",
"@cite_15",
"@cite_2"
],
"mid": [
"2406740263",
"2952337026",
""
],
"abstract": [
"The goal of sparse recovery is to recover a k-sparse signal x e Rn from (possibly noisy) linear measurements of the form y = Ax, where A e Rmxn describes the measurement process. Standard results in compressive sensing show that it is possible to recover the signal x from m = O(k log(n k)) measurements, and that this bound is tight. The framework of model-based compressive sensing [BCDH10] overcomes the lower bound and reduces the number of measurements further to O(k) by limiting the supports of x to a subset M of the (nk) possible supports. This has led to many measurement-efficient algorithms for a wide variety of signal models, including block-sparsity and tree-sparsity. Unfortunately, extending the framework to other, more general models has been stymied by the following obstacle: for the framework to apply, one needs an algorithm that, given a signal x, solves the following optimization problem exactly: [EQUATION] (here x[n] denotes the projection of x on coordinates not in Ω). However, an approximation algorithm for this optimization task is not sufficient. Since many problems of this form are not known to have exact polynomial-time algorithms, this requirement poses an obstacle for extending the framework to a richer class of models. In this paper, we remove this obstacle and show how to extend the model-based compressive sensing framework so that it requires only approximate solutions to the aforementioned optimization problems. Interestingly, our extension requires the existence of approximation algorithms for both the maximization and the minimization variants of the optimization problem. Further, we apply our framework to the Constrained Earth Mover's Distance (CEMD) model introduced in [SHI13], obtaining a sparse recovery scheme that uses significantly less than O(k log(n k)) measurements. This is the first non-trivial theoretical bound for this model, since the validation of the approach presented in [SHI13] was purely empirical. The result is obtained by designing a novel approximation algorithm for the maximization version of the problem and proving approximation guarantees for the minimization algorithm described in [SHI13].",
"Compressive sensing (CS) has recently emerged as a powerful framework for acquiring sparse signals. The bulk of the CS literature has focused on the case where the acquired signal has a sparse or compressible representation in an orthonormal basis. In practice, however, there are many signals that cannot be sparsely represented or approximated using an orthonormal basis, but that do have sparse representations in a redundant dictionary. Standard results in CS can sometimes be extended to handle this case provided that the dictionary is sufficiently incoherent or well-conditioned, but these approaches fail to address the case of a truly redundant or overcomplete dictionary. In this paper we describe a variant of the iterative recovery algorithm CoSaMP for this more challenging setting. We utilize the D-RIP, a condition on the sensing matrix analogous to the well-known restricted isometry property. In contrast to prior work, the method and analysis are \"signal-focused\"; that is, they are oriented around recovering the signal rather than its dictionary coefficients. Under the assumption that we have a near-optimal scheme for projecting vectors in signal space onto the model family of candidate sparse signals, we provide provable recovery guarantees. Developing a practical algorithm that can provably compute the required near-optimal projections remains a significant open problem, but we include simulation results using various heuristics that empirically exhibit superior performance to traditional recovery algorithms.",
""
]
}
|
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.