Columns: aid (string, 9 to 15 chars), mid (string, 7 to 10 chars), abstract (string, 78 to 2.56k chars), related_work (string, 92 to 1.77k chars), ref_abstract (dict)
1012.3951
1988014332
Maximally stable component detection is a very popular method for feature analysis in images, mainly due to its low computation cost and high repeatability. With the recent advance of feature-based methods in geometric shape analysis, there is significant interest in finding analogous approaches in the 3D world. In this paper, we formulate a diffusion-geometric framework for stable component detection in non-rigid 3D shapes, which can be used for geometric feature detection and description. A quantitative evaluation of our method on the SHREC'10 feature detection benchmark shows its potential as a source of high-quality features.
A different class of feature detection methods tries to find stable components or regions in the analyzed image or shape. In the image processing literature, the watershed transform is the precursor of many algorithms for stable component detection @cite_34 @cite_18 . In the computer vision and image analysis community, stable component detection is used in the maximally stable extremal regions (MSER) algorithm @cite_15 . MSER represents intensity level sets as a component tree and attempts to find level sets with the smallest area variation across intensity; the use of area ratio as the stability criterion makes this approach affine-invariant, which is an important property in image analysis, as it approximates viewpoint transformations. Alternative stability criteria based on geometric scale-space analysis have recently been proposed in @cite_29 .
{ "cite_N": [ "@cite_29", "@cite_18", "@cite_34", "@cite_15" ], "mid": [ "2149020196", "2124260943", "2159329531", "2124404372" ], "abstract": [ "Detection and description of affine-invariant features is a cornerstone component in numerous computer vision applications. In this note, we analyze the notion of maximally stable extremal regions (MSERs) through the prism of the curvature scale space, and conclude that in its original definition, MSER prefers regular (round) regions. Arguing that interesting features in natural images usually have irregular shapes, we propose alternative definitions of MSER which are free of this bias, yet maintain their invariance properties.", "A fast and flexible algorithm for computing watersheds in digital gray-scale images is introduced. A review of watersheds and related motion is first presented, and the major methods to determine watersheds are discussed. The algorithm is based on an immersion process analogy, in which the flooding of the water in the picture is efficiently simulated using of queue of pixel. It is described in detail provided in a pseudo C language. The accuracy of this algorithm is proven to be superior to that of the existing implementations, and it is shown that its adaptation to any kind of digital grid and its generalization to n-dimensional images (and even to graphs) are straightforward. The algorithm is reported to be faster than any other watershed algorithm. Applications of this algorithm with regard to picture segmentation are presented for magnetic resonance (MR) imagery and for digital elevation models. An example of 3-D watershed is also provided. >", "We propose an original approach to the watershed problem, based on topology. We introduce a 1D topology for grayscale images, and more generally for weighted graphs. This topology allows us to precisely define a topological grayscale transformation that generalizes the action of a watershed transformation. Furthermore, we propose an efficient algorithm to compute this topological grayscale transformation,a nd we give an example of application to image segmentation.© (1997) COPYRIGHT SPIE--The International Society for Optical Engineering. Downloading of the abstract is permitted for personal use only.", "Abstract The wide-baseline stereo problem, i.e. the problem of establishing correspondences between a pair of images taken from different viewpoints is studied. A new set of image elements that are put into correspondence, the so called extremal regions , is introduced. Extremal regions possess highly desirable properties: the set is closed under (1) continuous (and thus projective) transformation of image coordinates and (2) monotonic transformation of image intensities. An efficient (near linear complexity) and practically fast detection algorithm (near frame rate) is presented for an affinely invariant stable subset of extremal regions, the maximally stable extremal regions (MSER). A new robust similarity measure for establishing tentative correspondences is proposed. The robustness ensures that invariants from multiple measurement regions (regions obtained by invariant constructions from extremal regions), some that are significantly larger (and hence discriminative) than the MSERs, may be used to establish tentative correspondences. The high utility of MSERs, multiple measurement regions and the robust metric is demonstrated in wide-baseline experiments on image pairs from both indoor and outdoor scenes. 
Significant change of scale (3.5×), illumination conditions, out-of-plane rotation, occlusion, locally anisotropic scale change and 3D translation of the viewpoint are all present in the test problems. Good estimates of epipolar geometry (average distance from corresponding points to the epipolar line below 0.09 of the inter-pixel distance) are obtained." ] }
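The MSER-style stability criterion summarized in the related-work paragraph above (represent the level sets as nested components and prefer the threshold at which a component's area varies least) is easy to illustrate. Below is a minimal, self-contained Python sketch of that criterion for a single seeded component on an 8-bit image; it is only a toy simplification for illustration, not the cited MSER component-tree algorithm and not the paper's diffusion-geometric detector, and the image, seed, and delta parameter are made up for the example.

```python
# Toy illustration of the MSER stability idea: track the connected component of the
# level set {p : I(p) <= t} that contains a seed pixel, and prefer the threshold t
# where the component's relative area variation across intensity is smallest.
from collections import deque

def level_set_component_area(img, seed, t):
    """Area of the connected component of {pixel <= t} containing seed (4-connectivity)."""
    rows, cols = len(img), len(img[0])
    sr, sc = seed
    if img[sr][sc] > t:
        return 0
    seen, queue = {seed}, deque([seed])
    while queue:
        r, c = queue.popleft()
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if 0 <= nr < rows and 0 <= nc < cols and (nr, nc) not in seen and img[nr][nc] <= t:
                seen.add((nr, nc))
                queue.append((nr, nc))
    return len(seen)

def most_stable_threshold(img, seed, delta=1):
    """Threshold whose component minimizes |A(t+delta) - A(t-delta)| / A(t)."""
    best_t, best_score = None, float("inf")
    for t in range(delta, 256 - delta):          # assumes 8-bit intensities
        area = level_set_component_area(img, seed, t)
        if area == 0:
            continue
        variation = abs(level_set_component_area(img, seed, t + delta)
                        - level_set_component_area(img, seed, t - delta)) / area
        if variation < best_score:
            best_t, best_score = t, variation
    return best_t, best_score

# Toy 5x5 image: a dark 3x3 blob (intensity 10) on a bright background (intensity 200).
img = [[200] * 5 for _ in range(5)]
for r in range(1, 4):
    for c in range(1, 4):
        img[r][c] = 10
print(most_stable_threshold(img, seed=(2, 2)))   # the blob is stable over a wide threshold range
```

On this toy input the dark blob keeps the same area over a wide range of thresholds, so its area-variation score is zero there, which is exactly what the stability criterion rewards.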
1012.3951
1988014332
Maximally stable component detection is a very popular method for feature analysis in images, mainly due to its low computation cost and high repeatability. With the recent advance of feature-based methods in geometric shape analysis, there is significant interest in finding analogous approaches in the 3D world. In this paper, we formulate a diffusion-geometric framework for stable component detection in non-rigid 3D shapes, which can be used for geometric feature detection and description. A quantitative evaluation of our method on the SHREC'10 feature detection benchmark shows its potential as a source of high-quality features.
In the shape analysis community, shape decomposition into characteristic primitive elements was explored in @cite_25 . Methods similar to MSER have also been explored in the work on topological persistence @cite_24 . Persistence-based clustering @cite_32 was used by Skraba et al. @cite_6 to perform shape segmentation. In @cite_16 , Digne et al. extended the notion of vertex-weighted component trees to meshes and proposed detecting MSER regions using the mean curvature. That approach was evaluated only qualitatively, however, and not as a feature detector.
{ "cite_N": [ "@cite_32", "@cite_6", "@cite_24", "@cite_16", "@cite_25" ], "mid": [ "2036865416", "2136883589", "2096736341", "", "2006527792" ], "abstract": [ "We present a clustering scheme that combines a mode-seeking phase with a cluster merging phase in the corresponding density map. While mode detection is done by a standard graph-based hill-climbing scheme, the novelty of our approach resides in its use of topological persistence to guide the merging of clusters. Our algorithm provides additional feedback in the form of a set of points in the plane, called a persistence diagram (PD), which provably reflects the prominences of the modes of the density. In practice, this feedback enables the user to choose relevant parameter values, so that under mild sampling conditions the algorithm will output the correct number of clusters, a notion that can be made formally sound within persistence theory. The algorithm only requires rough estimates of the density at the data points, and knowledge of (approximate) pairwise distances between them. It is therefore applicable in any metric space. Meanwhile, its complexity remains practical: although the size of the input distance matrix may be up to quadratic in the number of data points, a careful implementation only uses a linear amount of memory and takes barely more time to run than to read through the input. In this conference version of the paper we emphasize the experimental aspects of our work, describing the approach, giving an intuitive overview of its theoretical guarantees, discussing the choice of its parameters in practice, and demonstrating its potential in terms of applications through a series of experimental results obtained on synthetic and real-life data sets. Precise statements and proofs of our theoretical claims can be found in the full version of the paper [7].", "In this paper, we combine two ideas: persistence-based clustering and the Heat Kernel Signature (HKS) function to obtain a multi-scale isometry invariant mesh segmentation algorithm. The key advantages of this approach is that it is tunable through a few intuitive parameters and is stable under near-isometric deformations. Indeed the method comes with feedback on the stability of the number of segments in the form of a persistence diagram. There are also spatial guarantees on part of the segments. Finally, we present an extension to the method which first detects regions which are inherently unstable and segments them separately. Both approaches are reasonably scalable and come with strong guarantees. We show numerous examples and a comparison with the segmentation benchmark and the curvature function.", "We formalize a notion of topological simplification within the framework of a filtration, which is the history of a growing complex. We classify a topological change that happens during growth as either a feature or noise, depending on its life-time or persistence within the filtration. We give fast algorithms for completing persistence and experimental evidence for their speed and utility.", "", "Tools for the automatic decomposition of a surface into shape features will facilitate the editing, matching, texturing, morphing, compression and simplification of three-dimensional shapes. Different features, such as flats, limbs, tips, pits and various blending shapes that transition between them, may be characterized in terms of local curvature and other differential properties of the surface or in terms of a global skelet al organization of the volume it encloses. 
Unfortunately, both solutions are extremely sensitive to small perturbations in surface smoothness and to quantization effects when they operate on triangulated surfaces. Thus, we propose a multi-resolution approach, which not only estimates the curvature of a vertex over neighborhoods of variable size, but also takes into account the topology of the surface in that neighborhood. Our approach is based on blowing a spherical bubble at each vertex and studying how the intersection of that bubble with the surface evolves. We describe an efficient approach for computing these characteristics for a sampled set of bubble radii and for using them to identify features, based on easily formulated filters, that may capture the needs of a particular application." ] }
1012.2509
2952096656
In this paper, we study sparsity-exploiting Mastermind algorithms for attacking the privacy of an entire database of character strings or vectors, such as DNA strings, movie ratings, or social network friendship data. Based on reductions to nonadaptive group testing, our methods are able to take advantage of minimal amounts of privacy leakage, such as contained in a single bit that indicates if two people in a medical database have any common genetic mutations, or if two people have any common friends in an online social network. We analyze our Mastermind attack algorithms using theoretical characterizations that provide sublinear bounds on the number of queries needed to clone the database, as well as experimental tests on genomic information, collaborative filtering data, and online social networks. By taking advantage of the generally sparse nature of these real-world databases and modulating a parameter that controls query sparsity, we demonstrate that relatively few nonadaptive queries are needed to recover a large majority of each database.
Chvátal @cite_25 studied the combinatorics of the general Mastermind game, showing that it can be solved in polynomial time using @math guesses. Chen @cite_20 showed how this can be improved to @math guesses, and Goodrich @cite_43 showed how this bound can be improved to @math . Unfortunately, from the perspective of the cloning problem, all of these algorithms are adaptive, in that they use the results of previous queries to construct future queries. Adaptive algorithms can only be used effectively for the interactions between a single pair of strings. For a sequence of queries to be used against an entire database of strings, we need a nonadaptive algorithm, that is, an algorithm whose queries do not depend on the answers to previous queries, which is equivalent to the codebreaker making all his guesses in advance.
{ "cite_N": [ "@cite_43", "@cite_25", "@cite_20" ], "mid": [ "2045496672", "", "1490551088" ], "abstract": [ "In this paper, we study the algorithmic complexity of the Mastermind game, where results are single-color black pegs. This differs from the usual dual-color version of the game, but better corresponds to applications in genetics. We show that it is NP-complete to determine if a sequence of single-color Mastermind results have a satisfying vector. We also show how to devise efficient algorithms for discovering a hidden vector through single-color queries. Indeed, our algorithm improves a previous method of Chvatal by almost a factor of 2.", "", "We study the problem of finding a hidden code k in the domain 1, ..., m n in the presence of an oracle which, for any x in the domain, answers a pair of numbers a(x,k) and b(x,k) such that a(x,k) is the number of components coinciding in x and k and, b(x, k) is the sum of a(x, k) and the number of components occurring in both x and k but, not at the same position. We show that ( m n )+ 2nlogn + 2n + 2 queries are sufficient to find any hidden code if m ≥ n." ] }
1012.2509
2952096656
In this paper, we study sparsity-exploiting Mastermind algorithms for attacking the privacy of an entire database of character strings or vectors, such as DNA strings, movie ratings, or social network friendship data. Based on reductions to nonadaptive group testing, our methods are able to take advantage of minimal amounts of privacy leakage, such as contained in a single bit that indicates if two people in a medical database have any common genetic mutations, or if two people have any common friends in an online social network. We analyze our Mastermind attack algorithms using theoretical characterizations that provide sublinear bounds on the number of queries needed to clone the database, as well as experimental tests on genomic information, collaborative filtering data, and online social networks. By taking advantage of the generally sparse nature of these real-world databases and modulating a parameter that controls query sparsity, we demonstrate that relatively few nonadaptive queries are needed to recover a large majority of each database.
Following a framework by Bancilhon and Spyratos @cite_31 , Deutsch and Papakonstantinou @cite_63 and Miklau and Suciu @cite_22 give related models for characterizing privacy loss in information releases from a database, which they call . In this framework, there is a secret, @math , that the data owner, Alice, is trying to protect. Attackers are allowed to ask legal queries of the database, while Alice tries to protect the information that these queries leak about @math . While this framework is related to the data-cloning attack, the two are not identical, since in the data-cloning attack there is no specifically sensitive part of the data. Instead, Alice is trying to limit how much of her data she releases to Bob rather than protecting any specific secret. Similarly, Kantarcioglu et al. @cite_46 study privacy models that quantify the degree to which data mining searches expose private information, but this related privacy model is also not directly applicable to the data-cloning attack.
{ "cite_N": [ "@cite_46", "@cite_31", "@cite_22", "@cite_63" ], "mid": [ "2152131375", "128781889", "2087154854", "" ], "abstract": [ "Privacy-preserving data mining has concentrated on obtaining valid results when the input data is private. An extreme example is Secure Multiparty Computation-based methods, where only the results are revealed. However, this still leaves a potential privacy breach: Do the results themselves violate privacy? This paper explores this issue, developing a framework under which this question can be addressed. Metrics are proposed, along with analysis that those metrics are consistent in the face of apparent problems.", "This paper is concerned with protection of information in relational databases from disclosure to properly identified users. It is assumed that the only means of access to the database is through a relational query language. The objective of the paper is to formalize the notion of protection. We first describe the information content of the database by a set of propositions and their truth values. The objects to be protected are (the truth values of) certain propositions that have been declared confidential. A query violates a protected proposition if its answer modifies the knowledge of tHe user about (the truth value of) this proposition. Following this approach, we propose a model for evaluating protection systems. In this model a protection system is characterized by the type of queries it takes as its input, the type of data it can protect, the means of protection against queries (e.g. rejection or modification) and the type of protection it provides (e.g., total protection, partial protection, protection against user's inference). Some examples of the use of the model as a tool for analysis are given.", "We perform a theoretical study of the following query-view security problem: given a view V to be published, does V logically disclose information about a confidential query S? The problem is motivated by the need to manage the risk of unintended information disclosure in today's world of universal data exchange. We present a novel information-theoretic standard for query-view security. This criterion can be used to provide a precise analysis of information disclosure for a host of data exchange scenarios, including multi-party collusion and the use of outside knowledge by an adversary trying to learn privileged facts about the database. We prove a number of theoretical results for deciding security according to this standard. We also generalize our security criterion to account for prior knowledge a user or adversary may possess, and introduce techniques for measuring the magnitude of partial disclosures. We believe these results can be a foundation for practical efforts to secure data exchange frameworks, and also illuminate a nice interaction between logic and probability theory.", "" ] }
1012.2509
2952096656
In this paper, we study sparsity-exploiting Mastermind algorithms for attacking the privacy of an entire database of character strings or vectors, such as DNA strings, movie ratings, or social network friendship data. Based on reductions to nonadaptive group testing, our methods are able to take advantage of minimal amounts of privacy leakage, such as contained in a single bit that indicates if two people in a medical database have any common genetic mutations, or if two people have any common friends in an online social network. We analyze our Mastermind attack algorithms using theoretical characterizations that provide sublinear bounds on the number of queries needed to clone the database, as well as experimental tests on genomic information, collaborative filtering data, and online social networks. By taking advantage of the generally sparse nature of these real-world databases and modulating a parameter that controls query sparsity, we demonstrate that relatively few nonadaptive queries are needed to recover a large majority of each database.
There has been considerable recent work on data modification approaches that can help protect the privacy or intellectual property rights of a database by modifying its content. For example, several researchers (e.g., see @cite_1 @cite_24 @cite_34 @cite_48 @cite_19 @cite_44 ) advocate the use of data watermarking to protect data rights. In using this technique, data values are altered to make it easier, after the fact, to track when someone has stolen information from a database. Of course, by that point, the data has already been cloned. Alternatively, several other researchers (e.g., @cite_47 @cite_32 @cite_58 @cite_28 @cite_4 @cite_18 @cite_53 @cite_39 ) propose using or as methods for achieving quantifiable privacy-preservation in databases. These techniques alter data values to protect sensitive parts of the data, while still allowing for data mining activities to be performed on the database. We assume here that Alice is not interested in data modification techniques, however, for we believe that accuracy is critically important in several database applications. For example, even a single base-pair mutation in a DNA string can indicate the existence of an increased health risk.
{ "cite_N": [ "@cite_18", "@cite_4", "@cite_28", "@cite_48", "@cite_53", "@cite_1", "@cite_32", "@cite_58", "@cite_39", "@cite_24", "@cite_19", "@cite_44", "@cite_47", "@cite_34" ], "mid": [ "2142291269", "2950422334", "2038268267", "", "2046776785", "", "2119067110", "56293434", "", "1993332882", "2100913369", "", "", "2120201091" ], "abstract": [ "k-anonymization techniques have been the focus of intense research in the last few years. An important requirement for such techniques is to ensure anonymization of data while at the same time minimizing the information loss resulting from data modifications. In this paper we propose an approach that uses the idea of clustering to minimize information loss and thus ensure good data quality. The key observation here is that data records that are naturally similar to each other should be part of the same equivalence class. We thus formulate a specific clustering problem, referred to as k-member clustering problem. We prove that this problem is NP-hard and present a greedy heuristic, the complexity of which is in O(n2). As part of our approach we develop a suitable metric to estimate the information loss introduced by generalizations, which works for both numeric and categorical data.", "After many years searching for electroweak production of top quarks, the Tevatron collider experiments have now moved from obtaining first evidence for single top quark production to an impressive array of measurements that test the standard model in several directions. This paper describes measurements of the single top quark cross sections, limits set on the CKM matrix element |Vtb|, searches for production of single top quarks produced via flavor-changing neutral currents and from heavy W-prime and H+ boson resonances, and studies of anomalous Wtb couplings. It concludes with projections for future expected significance as the analyzed datasets grow.", "The problem of finding a longest common subsequence of two strings is discussed. This problem arises in data processing applications such as comparing two files and in genetic applications such as studying molecular evolution. The difficulty of computing a longest common subsequence of two strings is examined using the decision tree model of computation, in which vertices represent “equal - unequal” comparisons. It is shown that unless a bound on the total number of distinct symbols is assumed, every solution to the problem can consume an amount of time that is proportional to the product of the lengths of the two strings. A general lower bound as a function of the ratio of alphabet size to string length is derived. The case where comparisons between symbols of the same string are forbidden is also considered and it is shown that this problem is of linear complexity for a two-symbol alphabet and quadratic for an alphabet of three or more symbols.", "", "In order to protect individuals' privacy, the technique of k-anonymization has been proposed to de-associate sensitive attributes from the corresponding identifiers. In this paper, we provide privacy-enhancing methods for creating k-anonymous tables in a distributed scenario. Specifically, we consider a setting in which there is a set of customers, each of whom has a row of a table, and a miner, who wants to mine the entire table. 
Our objective is to design protocols that allow the miner to obtain a k-anonymous table representing the customer data, in such a way that does not reveal any extra information that can be used to link sensitive attributes to corresponding identifiers, and without requiring a central authority who has access to all the original data. We give two different formulations of this problem, with provably private solutions. Our solutions enhance the privacy of k-anonymization in the distributed scenario by maintaining end-to-end privacy from the original customer data to the final k-anonymous results.", "", "Today's globally networked society places great demands on the dissemination and sharing of information. While in the past released information was mostly in tabular and statistical form, many situations call for the release of specific data (microdata). In order to protect the anonymity of the entities (called respondents) to which information refers, data holders often remove or encrypt explicit identifiers such as names, addresses, and phone numbers. Deidentifying data, however, provides no guarantee of anonymity. Released information often contains other data, such as race, birth date, sex, and ZIP code, that can be linked to publicly available information to reidentify respondents and inferring information that was not intended for disclosure. In this paper we address the problem of releasing microdata while safeguarding the anonymity of respondents to which the data refer. The approach is based on the definition of k-anonymity. A table provides k-anonymity if attempts to link explicitly identifying information to its content map the information to at least k entities. We illustrate how k-anonymity can be provided without compromising the integrity (or truthfulness) of the information released by using generalization and suppression techniques. We introduce the concept of minimal generalization that captures the property of the release process not distorting the data more than needed to achieve k-anonymity, and present an algorithm for the computation of such a generalization. We also discuss possible preference policies to choose among different minimal generalizations.", "", "", "", "we introduce a solution for relational database content rights protection through watermarking. Rights protection for relational data is of ever-increasing interest, especially considering areas where sensitive, valuable content is to be outsourced. A good example is a data mining application, where data is sold in pieces to parties specialized in mining it. Different avenues are available, each with its own advantages and drawbacks. Enforcement by legal means is usually ineffective in preventing theft of copyrighted works, unless augmented by a digital counterpart, for example, watermarking. While being able to handle higher level semantic constraints, such as classification preservation, our solution also addresses important attacks, such as subset selection and random and linear data changes. We introduce wmdb., a proof-of-concept implementation and its application to real-life data, namely, in watermarking the outsourced Wal-Mart sales data that we have available at our institute.", "", "", "Watermarking allows robust and unobtrusive insertion of information in a digital document. 
During the last few years, techniques have been proposed for watermarking relational databases or Xml documents, where information insertion must preserve a specific measure on data (for example the mean and variance of numerical attributes). In this article we investigate the problem of watermarking databases or Xml while preserving a set of parametric queries in a specified language, up to an acceptable distortion. We first show that unrestricted databases can not be watermarked while preserving trivial parametric queries. We then exhibit query languages and classes of structures that allow guaranteed watermarking capacity, namely 1) local query languages on structures with bounded degree Gaifman graph, and 2) monadic second-order queries on trees or treelike structures. We relate these results to an important topic in computational learning theory, the VC-dimension. We finally consider incremental aspects of query-preserving watermarking." ] }
1012.2509
2952096656
In this paper, we study sparsity-exploiting Mastermind algorithms for attacking the privacy of an entire database of character strings or vectors, such as DNA strings, movie ratings, or social network friendship data. Based on reductions to nonadaptive group testing, our methods are able to take advantage of minimal amounts of privacy leakage, such as contained in a single bit that indicates if two people in a medical database have any common genetic mutations, or if two people have any common friends in an online social network. We analyze our Mastermind attack algorithms using theoretical characterizations that provide sublinear bounds on the number of queries needed to clone the database, as well as experimental tests on genomic information, collaborative filtering data, and online social networks. By taking advantage of the generally sparse nature of these real-world databases and modulating a parameter that controls query sparsity, we demonstrate that relatively few nonadaptive queries are needed to recover a large majority of each database.
As mentioned above, we allow the queries Bob asks to be answered using SMC protocols, which reveal no additional information about the query string @math and each database string @math other than the response score @math . Such protocols have been developed for the kinds of comparisons that are done on genomic sequences (e.g., see @cite_16 @cite_26 @cite_54 ). In particular, Atallah et al. @cite_16 and Atallah and Li @cite_64 studied privacy-preserving protocols for edit-distance sequence comparisons, such as in the longest common subsequence (LCS) problem (e.g., @cite_17 @cite_59 @cite_62 ). Troncoso-Pastoriza et al. @cite_13 described a privacy-preserving protocol for regular-expression searching in a DNA sequence. Jha et al. @cite_21 give privacy-preserving protocols for computing edit distance and Smith-Waterman similarity scores between two genomic sequences, improving the privacy-preserving algorithm of Szajda et al. @cite_27 . Aligned matching between two strings can also be performed in a privacy-preserving manner, using privacy-preserving set intersection protocols (e.g., see @cite_11 @cite_54 @cite_37 @cite_51 @cite_8 ) or SMC methods for dot products (e.g., see @cite_3 @cite_29 @cite_15 ). In addition, the Fairplay system @cite_0 provides a general compiler for building such computations.
{ "cite_N": [ "@cite_37", "@cite_64", "@cite_26", "@cite_62", "@cite_11", "@cite_8", "@cite_54", "@cite_21", "@cite_29", "@cite_3", "@cite_0", "@cite_27", "@cite_59", "@cite_51", "@cite_15", "@cite_16", "@cite_13", "@cite_17" ], "mid": [ "2145341289", "2005991258", "2016267457", "", "", "1597844714", "2143087446", "2166971704", "2027471022", "", "1984680759", "309548903", "", "2145489953", "", "2165785801", "2085801087", "2165654401" ], "abstract": [ "There has been concern over the apparent conflict between privacy and data mining. There is no inherent conflict, as most types of data mining produce summary results that do not reveal information about individuals. The process of data mining may use private data, leading to the potential for privacy breaches. Secure Multiparty Computation shows that results can be produced without revealing the data used to generate them. The problem is that general techniques for secure multiparty computation do not scale to data-mining size computations. This paper presents an efficient protocol for securely determining the size of set intersection, and shows how this can be used to generate association rules where multiple parties have different (and private) information about the same set of individuals.", "Internet computing technologies, like grid computing, enable a weak computational device connected to such a grid to be less limited by its inadequate local computational, storage, and bandwidth resources. However, such a weak computational device (PDA, smartcard, sensor, etc.) often cannot avail itself of the abundant resources available on the network because its data are sensitive. This motivates the design of techniques for computational outsourcing in a privacy-preserving manner, i.e., without revealing to the remote agents whose computational power is being used either one’s data or the outcome of the computation. This paper investigates such secure outsourcing for widely applicable sequence comparison problems and gives an efficient protocol for a customer to securely outsource sequence comparisons to two remote agents. The local computations done by the customer are linear in the size of the sequences, and the computational cost and amount of communication done by the external agents are close to the time complexity of the best known algorithm for solving the problem on a single machine.", "The growth of the Internet has triggered tremendous opportunities for cooperative computation, where people are jointly conducting computation tasks based on the private inputs they each supplies. These computations could occur between mutually untrusted parties, or even between competitors. For example, customers might send to a remote database queries that contain private information; two competing financial organizations might jointly invest in a project that must satisfy both organizations' private and valuable constraints, and so on. Today, to conduct such computations, one entity must usually know the inputs from all the participants; however if nobody can be trusted enough to know all the inputs, privacy will become a primary concern.This problem is referred to as Secure Multi-party Computation Problem (SMC) in the literature. Research in the SMC area has been focusing on only a limited set of specific SMC problems, while privacy concerned cooperative computations call for SMC studies in a variety of computation domains. Before we can study the problems, we need to identify and define the specific SMC problems for those computation domains. 
We have developed a framework to facilitate this problem-discovery task. Based on our framework, we have identified and defined a number of new SMC problems for a spectrum of computation domains. Those problems include privacy-preserving database query, privacy-preserving scientific computations, privacy-preserving intrusion detection, privacy-preserving statistical analysis, privacy-preserving geometric computations, and privacy-preserving data mining.The goal of this paper is not only to present our results, but also to serve as a guideline so other people can identify useful SMC problems in their own computation domains.", "", "", "We propose a more efficient privacy preserving set intersection protocol which improves the previously known result by a factor of O(N) in both the computation and communication complexities (N is the number of parties in the protocol). Our protocol is obtained in the malicious model, in which we assume a probabilistic polynomial-time bounded adversary actively controls a fixed set of t (t < N 2) parties. We use a (t + 1,N)-threshold version of the Boneh-Goh-Nissim (BGN) cryptosystem whose underlying group supports bilinear maps. The BGN cryptosystem is generally used in applications where the plaintext space should be small, because there is still a Discrete Logarithm (DL) problem after the decryption. In our protocol the plaintext space can be as large as bounded by the security parameter τ, and the intractability of DL problem is utilized to protect the private datasets. Based on the bilinear map, we also construct some efficient non-interactive proofs. The security of our protocol can be reduced to the common intractable problems including the random oracle, subgroup decision and discrete logarithm problems. The computation complexity of our protocol is O(NS2τ3) (S is the cardinality of each party's dataset), and the communication complexity is O(NS2τ) bits. A similar work by (2006) needs O(N2S2τ3) computation complexity and O(N2S2τ) communication complexity for the same level of correctness as ours.", "We consider the problem of computing the intersection of private datasets of two parties, where the datasets contain lists of elements taken from a large domain. This problem has many applications for online collaboration. We present protocols, based on the use of homomorphic encryption and balanced hashing, for both semi-honest and malicious environments. For lists of length k, we obtain O(k) communication overhead and O(k ln ln k) computation. The protocol for the semi-honest environment is secure in the standard model, while the protocol for the malicious environment is secure in the random oracle model. We also consider the problem of approximating the size of the intersection, show a linear lower-bound for the communication overhead of solving this problem, and provide a suitable secure protocol. Lastly, we investigate other variants of the matching problem, including extending the protocol to the multi-party setting as well as considering the problem of approximate matching.", "Many basic tasks in computational biology involve operations on individual DNA and protein sequences. These sequences, even when anonymized, are vulnerable to re-identification attacks and may reveal highly sensitive information about individuals. We present a relatively efficient, privacy-preserving implementation of fundamental genomic computations such as calculating the edit distance and Smith- Waterman similarity scores between two sequences. 
Our techniques are crypto graphically secure and significantly more practical than previous solutions. We evaluate our prototype implementation on sequences from the Pfam database of protein families, and demonstrate that its performance is adequate for solving real-world sequence-alignment and related problems in a privacy- preserving manner. Furthermore, our techniques have applications beyond computational biology. They can be used to obtain efficient, privacy-preserving implementations for many dynamic programming algorithms over distributed datasets.", "We present a polynomial-time algorithm that, given as a input the description of a game with incomplete information and any number of players , produces a protocol for playing the game that leaks no partial information, provided the majority of the players is honest. Our algorithm automatically solves all the multi-party protocol problems addressed in complexity-based cryptography during the last 10 years. It actually is a completeness theorem for the class of distributed protocols with honest majority. Such completeness theorem is optimal in the sense that, if the majority of the players is not honest, some protocol problems have no efficient solution [C].", "", "We present FairplayMP (for \"Fairplay Multi-Party\"), a system for secure multi-party computation. Secure computation is one of the great achievements of modern cryptography, enabling a set of untrusting parties to compute any function of their private inputs while revealing nothing but the result of the function. In a sense, FairplayMP lets the parties run a joint computation that emulates a trusted party which receives the inputs from the parties, computes the function, and privately informs the parties of their outputs. FairplayMP operates by receiving a high-level language description of a function and a configuration file describing the participating parties. The system compiles the function into a description as a Boolean circuit, and perform a distributed evaluation of the circuit while revealing nothing else. FairplayMP supplements the Fairplay system [16], which supported secure computation between two parties. The underlying protocol of FairplayMP is the Beaver-Micali-Rogaway (BMR) protocol which runs in a constant number of communication rounds (eight rounds in our implementation). We modified the BMR protocol in a novel way and considerably improved its performance by using the Ben-Or-Goldwasser-Wigderson (BGW) protocol for the purpose of constructing gate tables. We chose to use this protocol since we believe that the number of communication rounds is a major factor on the overall performance of the protocol. We conducted different experiments which measure the effect of different parameters on the performance of the system and demonstrate its scalability. (We can now tell, for example, that running a second-price auction between four bidders, using five computation players, takes about 8 seconds.)", "Volunteer distributed computations utilize spare processor cycles of personal computers that are connected to the Internet. The resulting platforms provide computational power previously available only through the use of expensive clusters or supercomputers. However, distributed computations running in untrustworthy environments raise a number of security concerns, including computation integrity and data privacy. 
This paper introduces a strategy for enhancing data privacy in some distributed volunteer computations, providing an important first step toward a general data privacy solution for these computations. The strategy is used to provide enhanced data privacy for the Smith-Waterman local nucleotide sequence comparison algorithm. Our modified Smith-Waterman algorithm provides reasonable performance, identifying most, and in many cases all, sequence pairs that exhibit statistically significant similarity according to the unmodified algorithm, with reasonable levels of false positives. Moreover the modified algorithm achieves a net decrease in execution time, with no increase in memory requirements. Most importantly, our scheme represents an important first step toward providing data privacy for a practical and important real-world algorithm.", "", "When datasets are distributed on different sources, finding out their intersection while preserving the privacy of the datasets is a widely required task. In this paper, we address the privacy preserving set intersection (PPSI) problem, in which each of the N parties learns no elements other than the intersection of their N private datasets. We propose an efficient protocol in the malicious model, where the adversary may control arbitrary number of parties and execute the protocol for its own benefit. A related work in [12] has a correctness probability of ( v;1)ldquo (f is the size of the encryption scheme's plaintext space), a computation complexity of' 0(N2 S2lgf) (S is the size of each party's data set). Our PPSI protocol in the malicious model has a correctness probability iquest C a -1)JV 1 plusmnmiddotd achieves a computation cost of 0 c2S2lgM) (c is the number of malicious parties and c < N eurordquo I).", "", "We give an efficient protocol for sequence comparisons of the edit-distance kind, such that neither party reveals anything about their private sequence to the other party (other than what can be inferred from the edit distance between their two sequences - which is unavoidable because computing that distance is the purpose of the protocol). The amount of communication done by our protocol is proportional to the time complexity of the best-known algorithm for performing the sequence comparison.The problem of determining the similarity between two sequences arises in a large number of applications, in particular in bioinformatics. In these application areas, the edit distance is one of the most widely used notions of sequence similarity: It is the least-cost set of insertions, deletions, and substitutions required to transform one string into the other. The generalizations of edit distance that are solved by the same kind of dynamic programming recurrence relation as the one for edit distance, cover an even wider domain of applications.", "Human Desoxyribo-Nucleic Acid (DNA) sequences offer a wealth of information that reveal, among others, predisposition to various diseases and paternity relations. The breadth and personalized nature of this information highlights the need for privacy-preserving protocols. In this paper, we present a new error-resilient privacy-preserving string searching protocol that is suitable for running private DNA queries. This protocol checks if a short template (e.g., a string that describes a mutation leading to a disease), known to one party, is present inside a DNA sequence owned by another party, accounting for possible errors and without disclosing to each party the other party's input. 
Each query is formulated as a regular expression over a finite alphabet and implemented as an automaton. As the main technical contribution, we provide a protocol that allows to execute any finite state machine in an oblivious manner, requiring a communication complexity which is linear both in the number of states and the length of the input string.", "The problem of finding a longest common subsequence of two strings has been solved in quadratic time and space. An algorithm is presented which will solve this problem in quadratic time and in linear space." ] }
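The paragraph above cites SMC protocols whose output is a standard sequence-comparison score such as the longest common subsequence (LCS) length or an edit distance. For reference, the following is the plaintext dynamic program that such protocols evaluate obliviously; it is a sketch of the underlying computation only, not of any cited privacy-preserving protocol, and the example strings are made up.

```python
# Plaintext LCS length in O(len(a) * len(b)) time and O(len(b)) space; edit distance
# and Smith-Waterman scores are computed by the same style of quadratic recurrence.
def lcs_length(a: str, b: str) -> int:
    prev = [0] * (len(b) + 1)
    for ch_a in a:
        curr = [0]
        for j, ch_b in enumerate(b, start=1):
            if ch_a == ch_b:
                curr.append(prev[j - 1] + 1)
            else:
                curr.append(max(prev[j], curr[j - 1]))
        prev = curr
    return prev[-1]

assert lcs_length("ABCBDAB", "BDCABA") == 4   # e.g., "BCBA" is a longest common subsequence
```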
1012.2509
2952096656
In this paper, we study sparsity-exploiting Mastermind algorithms for attacking the privacy of an entire database of character strings or vectors, such as DNA strings, movie ratings, or social network friendship data. Based on reductions to nonadaptive group testing, our methods are able to take advantage of minimal amounts of privacy leakage, such as contained in a single bit that indicates if two people in a medical database have any common genetic mutations, or if two people have any common friends in an online social network. We analyze our Mastermind attack algorithms using theoretical characterizations that provide sublinear bounds on the number of queries needed to clone the database, as well as experimental tests on genomic information, collaborative filtering data, and online social networks. By taking advantage of the generally sparse nature of these real-world databases and modulating a parameter that controls query sparsity, we demonstrate that relatively few nonadaptive queries are needed to recover a large majority of each database.
Goodrich @cite_9 studies the problem of discovering a single DNA string from a series of genomic Mastermind queries. All his methods are sequential and adaptive, however, so the only way they could be applied to a data-cloning attack on an entire biological database is if Bob were to focus on each string @math in @math in turn. That is, he would have to gear his queries toward specifically discovering each @math in @math distinct "rounds" of computation, each of which requires a large number of string-comparison queries. Such an adaptation of Goodrich's Mastermind attacks to perform data cloning would therefore be prohibitively expensive for Bob. Our approach, instead, is based on performing a nonadaptive Mastermind attack on the entire database at once.
{ "cite_N": [ "@cite_9" ], "mid": [ "2154357802" ], "abstract": [ "In this paper, we study the degree to which a genomic string, @math ,leaks details about itself any time it engages in comparison protocolswith a genomic querier, Bob, even if those protocols arecryptographically guaranteed to produce no additional information otherthan the scores that assess the degree to which @math matches stringsoffered by Bob. We show that such scenarios allow Bob to play variantsof the game of Mastermind with @math so as to learn the complete identityof @math . We show that there are a number of efficient implementationsfor Bob to employ in these Mastermind attacks, depending on knowledgehe has about the structure of @math , which show how quickly he candetermine @math . Indeed, we show that Bob can discover @math using anumber of rounds of test comparisons that is much smaller than thelength of @math , under various assumptions regarding the types of scoresthat are returned by the cryptographic protocols and whether he can useknowledge about the distribution that @math comes from, e.g., usingpublic knowledge about the properties of human DNA. We also providethe results of an experimental study we performed on a database ofmitochondrial DNA, showing the vulnerability of existing real-world DNAdata to the Mastermind attack." ] }
1012.2509
2952096656
In this paper, we study sparsity-exploiting Mastermind algorithms for attacking the privacy of an entire database of character strings or vectors, such as DNA strings, movie ratings, or social network friendship data. Based on reductions to nonadaptive group testing, our methods are able to take advantage of minimal amounts of privacy leakage, such as contained in a single bit that indicates if two people in a medical database have any common genetic mutations, or if two people have any common friends in an online social network. We analyze our Mastermind attack algorithms using theoretical characterizations that provide sublinear bounds on the number of queries needed to clone the database, as well as experimental tests on genomic information, collaborative filtering data, and online social networks. By taking advantage of the generally sparse nature of these real-world databases and modulating a parameter that controls query sparsity, we demonstrate that relatively few nonadaptive queries are needed to recover a large majority of each database.
We note that others have investigated de-anonymization techniques on both social networks @cite_23 and Netflix data @cite_30 . These works are complementary to our goal of cloning the databases themselves.
{ "cite_N": [ "@cite_30", "@cite_23" ], "mid": [ "2135930857", "2163263459" ], "abstract": [ "We present a new class of statistical de- anonymization attacks against high-dimensional micro-data, such as individual preferences, recommendations, transaction records and so on. Our techniques are robust to perturbation in the data and tolerate some mistakes in the adversary's background knowledge. We apply our de-anonymization methodology to the Netflix Prize dataset, which contains anonymous movie ratings of 500,000 subscribers of Netflix, the world's largest online movie rental service. We demonstrate that an adversary who knows only a little bit about an individual subscriber can easily identify this subscriber's record in the dataset. Using the Internet Movie Database as the source of background knowledge, we successfully identified the Netflix records of known users, uncovering their apparent political preferences and other potentially sensitive information.", "In a social network, nodes correspond topeople or other social entities, and edges correspond to social links between them. In an effort to preserve privacy, the practice of anonymization replaces names with meaningless unique identifiers. We describe a family of attacks such that even from a single anonymized copy of a social network, it is possible for an adversary to learn whether edges exist or not between specific targeted pairs of nodes." ] }
1012.2152
1541803826
As sensors become ever more prevalent, more and more information will be collected about each of us. A long-term research question is how best to support beneficial uses while preserving individual privacy. Presence systems are an emerging class of applications that support collaboration. These systems leverage pervasive sensors to estimate end-user location, activities, and available communication channels. Because such presence data are sensitive, to achieve widespread adoption, sharing models must reflect the privacy and sharing preferences of the users. To reflect users' collaborative relationships and sharing desires, we introduce CollaPSE security, in which an individual has full access to her own data, a third party processes the data without learning anything about the data values, and users higher up in the hierarchy learn only statistical information about the employees under them. We describe simple schemes that efficiently realize CollaPSE security for time series data. We implemented these protocols using readily available cryptographic functions, and integrated the protocols with FXPAL's myUnity presence system.
@cite_8 study how to enable clinical research without giving patient records to the researchers. In their solution, caregivers, who have full access to patient records, use multiparty computation with public key homomorphic encryption to answer researcher aggregation queries.
{ "cite_N": [ "@cite_8" ], "mid": [ "2018217969" ], "abstract": [ "A recent national survey suggests that the HIPAA privacy rule has not only failed to preserve patient privacy adequately, but also has had a negative impact on clinical research. Our work suggests that researchers revisit the possibilities of homomorphic encryption and apply the techniques to secure aggregation of medical telemetry. A primary goal is to maintain the privacy of individual patient records while also allowing clinical researchers to have flexible access to aggregated information. We discuss the preliminary design of HICCUPS, a distributed system that uses homomorphic encryption to allow only the caregivers to have unrestricted access to patients' records and at the same time enable researchers to compute statistical values and aggregation functions across different patients and caregivers. In the context of processing medical telemetry, we advocate expressibility of aggregation functions more than fast computation as a primary metric of system quality." ] }
1012.2152
1541803826
As sensors become ever more prevalent, more and more information will be collected about each of us. A long-term research question is how best to support beneficial uses while preserving individual privacy. Presence systems are an emerging class of applications that support collaboration. These systems leverage pervasive sensors to estimate end-user location, activities, and available communication channels. Because such presence data are sensitive, to achieve widespread adoption, sharing models must reflect the privacy and sharing preferences of the users. To reflect users' collaborative relationships and sharing desires, we introduce CollaPSE security, in which an individual has full access to her own data, a third party processes the data without learning anything about the data values, and users higher up in the hierarchy learn only statistical information about the employees under them. We describe simple schemes that efficiently realize CollaPSE security for time series data. We implemented these protocols using readily available cryptographic functions, and integrated the protocols with FXPAL's myUnity presence system.
Differential privacy foils the deduction of individual attributes from released data such as aggregate statistics, a concern complementary to our own. In the standard setting, the differential privacy mechanism is carried out by a trusted curator who has access to all of the data. Rastogi and Nath @cite_9 instead use Paillier threshold homomorphic encryption to achieve differentially private aggregation of encrypted data without a trusted curator. Their decryption, unlike ours, is multiparty.
{ "cite_N": [ "@cite_9" ], "mid": [ "2104803737" ], "abstract": [ "We propose the first differentially private aggregation algorithm for distributed time-series data that offers good practical utility without any trusted server. This addresses two important challenges in participatory data-mining applications where (i) individual users collect temporally correlated time-series data (such as location traces, web history, personal health data), and (ii) an untrusted third-party aggregator wishes to run aggregate queries on the data. To ensure differential privacy for time-series data despite the presence of temporal correlation, we propose the Fourier Perturbation Algorithm (FPAk). Standard differential privacy techniques perform poorly for time-series data. To answer n queries, such techniques can result in a noise of Θ(n) to each query answer, making the answers practically useless if n is large. Our FPAk algorithm perturbs the Discrete Fourier Transform of the query answers. For answering n queries, FPAk improves the expected error from Θ(n) to roughly Θ(k) where k is the number of Fourier coefficients that can (approximately) reconstruct all the n query answers. Our experiments show that k To deal with the absence of a trusted central server, we propose the Distributed Laplace Perturbation Algorithm (DLPA) to add noise in a distributed way in order to guarantee differential privacy. To the best of our knowledge, DLPA is the first distributed differentially private algorithm that can scale with a large number of users: DLPA outperforms the only other distributed solution for differential privacy proposed so far, by reducing the computational load per user from O(U) to O(1) where U is the number of users." ] }
1012.2248
2950530509
Traditional electricity meters are being replaced by Smart Meters in customers' households. Smart Meters collect fine-grained utility consumption profiles from customers, which in turn enables the introduction of dynamic, time-of-use tariffs. However, the fine-grained usage data compiled in this process also makes it possible to infer the inhabitants' personal schedules and habits. We propose a privacy-preserving protocol that enables billing with time-of-use tariffs without disclosing the actual consumption profile to the supplier. Our approach relies on a zero-knowledge proof based on Pedersen Commitments, performed by a plug-in privacy component placed in the communication link between the Smart Meter and the supplier's back-end system. We require no changes to the Smart Meter hardware and only small changes to the software of the Smart Meter and the back-end system. In this paper we describe the functional and privacy requirements, the specification and security proof of our solution, and give a performance evaluation of a prototypical implementation.
In @cite_6 a privacy-preserving algorithm for detecting leakage in electricity distribution is proposed. By aggregating across several Smart Meters, the algorithm protects individual meter readings while allowing grid operators to detect illegitimate unknown load. Their approach does not allow individual billing, which is the main application addressed in our paper.
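The scheme in @cite_6 combines Paillier encryption with additive secret sharing; the toy fragment below sketches only the secret-sharing half of that idea: each meter splits its reading into random additive shares modulo a public prime, so that an operator can recover the neighbourhood total without ever seeing an individual reading. Meter values, the modulus and the number of shares are illustrative assumptions.

```python
import random

PRIME = 2**61 - 1  # public modulus, chosen arbitrarily for this sketch

def share(reading, n_shares):
    """Split one meter reading into n additive shares modulo PRIME."""
    shares = [random.randrange(PRIME) for _ in range(n_shares - 1)]
    shares.append((reading - sum(shares)) % PRIME)
    return shares

def aggregate(all_shares):
    """Sum the shares column-wise; only the total consumption is revealed."""
    partial_sums = [sum(column) % PRIME for column in zip(*all_shares)]
    return sum(partial_sums) % PRIME

meters = [1200, 850, 430, 990]                    # hypothetical Wh readings
shared = [share(m, n_shares=3) for m in meters]
assert aggregate(shared) == sum(meters) % PRIME
print("neighbourhood total:", aggregate(shared))
```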
{ "cite_N": [ "@cite_6" ], "mid": [ "2096219879" ], "abstract": [ "The first part of this paper discusses developments wrt. smart (electricity) meters (simply called E-meters) in general, with emphasis on security and privacy issues. The second part will be more technical and describes protocols for secure communication with E-meters and for fraud detection (leakage) in a privacy-preserving manner. Our approach uses a combination of Paillier's additive homomorphic encryption and additive secret sharing to compute the aggregated energy consumption of a given set of users." ] }
1012.2248
2950530509
Traditional electricity meters are being replaced by Smart Meters in customers' households. Smart Meters collect fine-grained utility consumption profiles from customers, which in turn enables the introduction of dynamic, time-of-use tariffs. However, the fine-grained usage data compiled in this process also makes it possible to infer the inhabitants' personal schedules and habits. We propose a privacy-preserving protocol that enables billing with time-of-use tariffs without disclosing the actual consumption profile to the supplier. Our approach relies on a zero-knowledge proof based on Pedersen Commitments, performed by a plug-in privacy component placed in the communication link between the Smart Meter and the supplier's back-end system. We require no changes to the Smart Meter hardware and only small changes to the software of the Smart Meter and the back-end system. In this paper we describe the functional and privacy requirements, the specification and security proof of our solution, and give a performance evaluation of a prototypical implementation.
Furthermore, in @cite_0 a model for measuring privacy in Smart Metering is developed and two different privacy solutions are subsequently presented: a Trusted Third Party-based approach, in which individual consumption profiles are aggregated at the third party and only sums are communicated to the supplier, and a masking approach, which adds randomness with zero expectation to the actual profile. In contrast to our solution, neither of their approaches can handle billing of time-of-use tariffs; they provide only sums or inaccurate profiles. Furthermore, our approach does not require a trusted third party and provides exact results for every computation (as required by some legislation).
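The billing protocol described in the abstract above builds on Pedersen commitments. The toy code below, with deliberately tiny and insecure parameters chosen only for illustration, shows the additive homomorphism that such commitment-based billing exploits: the product of the per-interval commitments opens to the total bill.

```python
import random

# Tiny, insecure demo group: p = 2q + 1 with p and q prime.
p, q = 2963, 1481
g, h = 4, 9   # elements of the order-q subgroup; in practice log_g(h) must be unknown

def commit(value, randomness):
    """Pedersen commitment C = g^value * h^randomness mod p."""
    return (pow(g, value, p) * pow(h, randomness, p)) % p

# Hypothetical per-interval costs (in cents) with fresh randomness per interval.
costs = [37, 12, 55]
rands = [random.randrange(q) for _ in costs]
commitments = [commit(c, r) for c, r in zip(costs, rands)]

# Homomorphism: multiplying commitments yields a commitment to the sum of values.
product = 1
for c in commitments:
    product = (product * c) % p
assert product == commit(sum(costs), sum(rands) % q)
print("total bill (consistent with the aggregated commitment):", sum(costs))
```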
{ "cite_N": [ "@cite_0" ], "mid": [ "2135193968" ], "abstract": [ "Electricity suppliers have started replacing traditional electricity meters with so-called smart meters, which can transmit current power consumption levels to the supplier within short intervals. Though this is advantageous for the electricity suppliers' planning purposes, and also allows the customers a more detailed look at their usage behavior, it means a considerable risk for privacy. The detailed information can be used to judge whether persons are in the household, when they come home, which electric devices they use (e.g. when they watch TV), and so forth. In this work, we introduce the \"smart metering privacy model\" for measuring the degree of privacy that a smart metering application can provide. Moreover, we present two design solutions both with and without involvement of trusted third parties. We show that the solution with trusted party can provide \"perfect privacy\" under certain conditions." ] }
1012.2248
2950530509
Traditional electricity meters are being replaced by Smart Meters in customers' households. Smart Meters collect fine-grained utility consumption profiles from customers, which in turn enables the introduction of dynamic, time-of-use tariffs. However, the fine-grained usage data compiled in this process also makes it possible to infer the inhabitants' personal schedules and habits. We propose a privacy-preserving protocol that enables billing with time-of-use tariffs without disclosing the actual consumption profile to the supplier. Our approach relies on a zero-knowledge proof based on Pedersen Commitments, performed by a plug-in privacy component placed in the communication link between the Smart Meter and the supplier's back-end system. We require no changes to the Smart Meter hardware and only small changes to the software of the Smart Meter and the back-end system. In this paper we describe the functional and privacy requirements, the specification and security proof of our solution, and give a performance evaluation of a prototypical implementation.
Finally, a twofold approach is also presented in @cite_19 : the first solution employs a sophisticated Trusted Platform Module (TPM) in the Smart Meter to obtain signed tariff data from the supplier and calculate a trustworthy bill. The second solution uses the electrical grid infrastructure as a third party to anonymize the up-to-date consumption values sent out constantly by Smart Meters. Our approach differs in that it addresses only billing and requires only a very simple TPM that creates commitments.
{ "cite_N": [ "@cite_19" ], "mid": [ "2278015510" ], "abstract": [ "The smart grid is seen as the greatest technological invention — in the context of energy supply — since the grid connection of millions of households. Smart metering, as the enabling technology for the future grid, attracts attention of industry, politics, and the scientific community. As smart meters are spreading into more and more households, electricity customers are directly affected by the technology. At the same time, critical voices accuse smart metering of violating customers’ privacy. Personal data are collected, allowing electricity service providers a monitoring of customers’ habits. In this paper, we propose a solution that provides anonymity for customers towards their ESP. At the same time, we do not limit the functionality that the ESP is asking for, especially the periodic reporting of customers’ current electricity consumption values. Moreover, our concept allows for the ESP to build trust in the software that is executed within their customers’ smart meters." ] }
1012.3040
3076088
The syntactic nature and compositionality of stochastic process algebras make models easy for human beings to understand, but not convenient for machines, or indeed people, to directly carry out mathematical analysis and stochastic simulation on. This paper presents a numerical representation schema for the stochastic process algebra PEPA, which provides a platform for directly and conveniently employing a variety of computational approaches to analyse the models both qualitatively and quantitatively. Moreover, the approaches developed on the basis of the schema are demonstrated and discussed. In particular, algorithms are presented for automatically deriving the schema from a general PEPA model and for simulating the model based on the derived schema to obtain performance measures.
Our work is motivated and stimulated by the pioneering work on the numerical vector form and activity matrix in @cite_1 , which was dedicated to the fluid approximation of PEPA. The P/T structure underlying each PEPA model, as stated in Theorem in this paper, reveals tight connections between stochastic process algebras and stochastic Petri nets. Based on this structure and the theories developed for Petri nets, several powerful techniques for the structural analysis of PEPA were presented in @cite_0 , including a structure-based deadlock-checking method that avoids the state-space explosion problem. In @cite_5 , a new operational semantics was proposed to give a compact symbolic representation of PEPA models. This semantics extends the application scope of the fluid approximation of PEPA by incorporating all the operators of the language and removing earlier assumptions on the syntactical structure of the models amenable to this analysis. Moreover, the paper @cite_2 shows how to derive performance metrics such as action throughput and capacity utilisation from the fluid approximation of a PEPA model.
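As a generic, invented illustration of how an activity matrix induces fluid ODEs (a simplification, not the full PEPA fluid semantics of @cite_1 ), the sketch below integrates dx/dt = C · r(x) with a plain Euler step for a made-up client/server model; the matrix, rate functions and initial populations are assumptions.

```python
import numpy as np

def fluid_trajectory(activity_matrix, rate_fns, x0, dt=0.01, steps=2000):
    """Euler integration of dx/dt = C @ r(x): one column of C per activity,
    one row per local derivative in the numerical state vector x."""
    x = np.array(x0, dtype=float)
    traj = [x.copy()]
    for _ in range(steps):
        rates = np.array([f(x) for f in rate_fns])
        x = x + dt * activity_matrix @ rates
        traj.append(x.copy())
    return np.array(traj)

# Invented example: Clients (idle/waiting) cooperating with Servers (free/busy)
# over two activities, 'request' and 'respond'.
C = np.array([[-1, +1],    # Client_idle
              [+1, -1],    # Client_wait
              [-1, +1],    # Server_free
              [+1, -1]])   # Server_busy
rate_fns = [lambda x: 1.0 * min(x[0], x[2]),   # request: bounded by both populations
            lambda x: 0.5 * x[3]]              # respond: proportional to busy servers
traj = fluid_trajectory(C, rate_fns, x0=[100, 0, 20, 0])
print("final state estimate:", traj[-1])
```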
{ "cite_N": [ "@cite_0", "@cite_5", "@cite_1", "@cite_2" ], "mid": [ "", "2076829994", "2111390218", "2112901936" ], "abstract": [ "", "The exact performance analysis of large-scale software systems with discrete-state approaches is difficult because of the well-known problem of state-space explosion. This paper considers this problem with regard to the stochastic process algebra PEPA, presenting a deterministic approximation to the underlying Markov chain model based on ordinary differential equations. The accuracy of the approximation is assessed by means of a substantial case study of a distributed multithreaded application.", "In this paper we present a novel performance analysis technique for large scale systems modelled in the stochastic process algebra PEPA. In contrast to the well-known approach of analysing via continuous time Markov chains, our underlying mathematical representation is a set of coupled ordinary differential equations (ODEs). This analysis process supports all of the comhinators of the PEPA algebra and is well suited to systems with large numbers of replicated components. The paper presents an elegant procedure for the generation of the ODEs and compares the results of this analysis with more conventional methods.", "Reasoning about the performance of models of software systems typically entails the derivation of metrics such as throughput, utilization, and response time. If the model is a Markov chain, these are expressed as real functions of the chain, called reward models. The computational complexity of reward-based metrics is of the same order as the solution of the Markov chain, making the analysis infeasible when evaluating large-scale systems. In the context of the stochastic process algebra PEPA, the underlying continuous-time Markov chain has been shown to admit a deterministic (fluid) approximation as a solution of an ordinary differential equation, which effectively circumvents state-space explosion. This paper is concerned with approximating Markovian reward models for PEPA with fluid rewards, i.e., functions of the solution of the differential equation problem. It shows that (1) the Markovian reward models for typical metrics of performance enjoy asymptotic convergence to their fluid analogues, and that (2) via numerical tests, the approximation yields satisfactory accuracy in practice." ] }
1012.1131
1516924274
Nowadays we are faced with the increasing popularity of social software, including wikis, blogs, micro-blogs and online social networks such as Facebook and MySpace. Unfortunately, the most widely used social services are centralized and personal information is stored at a single vendor. This results in potential privacy problems, as users do not have much control over how their private data is disseminated. To overcome this limitation, some recent approaches envision replacing the centralization of services under a single authority with a peer-to-peer, trust-based approach where users can decide with whom they want to share their private data. In this peer-to-peer collaboration it is very difficult to ensure that, after data is shared with other peers, these peers will not misbehave and violate data privacy. In this paper we propose a mechanism that addresses the issue of data privacy violation due to data disclosure to malicious peers. In our approach, trust values between users are adjusted according to their previous activities on the shared data. Users share their private data by specifying obligations the receivers must follow. We log the modifications users make on the shared data as well as the obligations that must be followed when data is shared. Using a log-auditing mechanism we detect users who misbehaved and adjust their associated trust values by using any existing decentralized trust model.
In order to return data ownership to users rather than to a third-party central authority, some recent works @cite_3 @cite_15 explore the coupling between social networks and peer-to-peer systems. In this context, privacy protection is understood as allowing users to encrypt their data and control access through appropriate key sharing and distribution. Our approach is complementary to this work and addresses what happens to data after it has been shared.
{ "cite_N": [ "@cite_15", "@cite_3" ], "mid": [ "2122048208", "1984599464" ], "abstract": [ "Online social networking has quickly become one of the most common Internet activities. As social networks evolve, they encourage users to share more information, requiring the users, in turn, to place more trust into social networks. In centralized systems, this means trusting a third-party commercial entity, like Facebook or MySpace. Peer-to-peer (P2P) systems can enable the creation of online social networks extending trust to friends only. In this paper, we present a novel approach to constructing completely decentralized social networks through P2P overlays, OverSoc. Our approach relies on a common directory overlay, which facilitates friend discovery and bootstraps connectivity to individualized profile overlays. Each user has their own individual profile overlay managed transparently using a public key infrastructure (PKI). We define necessary interfaces for constructing the system and describe some examples of user interactions with the system.", "To address privacy concerns over Online Social Networks (OSNs), we propose a distributed, peer-to-peer approach coupled with encryption. Moreover, extending this distributed approach by direct data exchange between user devices removes the strict Internet-connectivity requirements of web-based OSNs. In order to verify the feasibility of this approach, we designed a two-tiered architecture and protocols that recreate the core features of OSNs in a decentralized way. This paper focuses on the description of the prototype built for the P2P infrastructure for social networks, as a first step without the encryption part, and shares early experiences from the prototype and insights gained since first outlining the challenges and possibilities of decentralized alternatives to OSNs." ] }
1012.1131
1516924274
Nowadays we are faced with the increasing popularity of social software, including wikis, blogs, micro-blogs and online social networks such as Facebook and MySpace. Unfortunately, the most widely used social services are centralized and personal information is stored at a single vendor. This results in potential privacy problems, as users do not have much control over how their private data is disseminated. To overcome this limitation, some recent approaches envision replacing the centralization of services under a single authority with a peer-to-peer, trust-based approach where users can decide with whom they want to share their private data. In this peer-to-peer collaboration it is very difficult to ensure that, after data is shared with other peers, these peers will not misbehave and violate data privacy. In this paper we propose a mechanism that addresses the issue of data privacy violation due to data disclosure to malicious peers. In our approach, trust values between users are adjusted according to their previous activities on the shared data. Users share their private data by specifying obligations the receivers must follow. We log the modifications users make on the shared data as well as the obligations that must be followed when data is shared. Using a log-auditing mechanism we detect users who misbehaved and adjust their associated trust values by using any existing decentralized trust model.
Another approach that addresses data privacy violation in peer-to-peer environments is Priserv @cite_1 , a DHT privacy service that combines Hippocratic database principles with notions of trust. Hippocratic databases enforce purpose-based privacy, while reputation techniques guarantee trust. However, this approach focuses on a database solution and is limited to relational tables. Moreover, as opposed to our solution, Priserv proposes neither a mechanism for discovering malicious users who do not respect the obligations required for using the data, nor an approach for updating the trust values associated with users.
{ "cite_N": [ "@cite_1" ], "mid": [ "1600097936" ], "abstract": [ "P2P systems are increasingly used for efficient, scalable data sharing. Popular applications focus on massive file sharing. However, advanced applications such as online communities (e.g., medical or research communities) need to share private or sensitive data. Currently, in P2P systems, untrusted peers can easily violate data privacy by using data for malicious purposes (e.g., fraudulence, profiling). To prevent such behavior, the well accepted Hippocratic database principle states that data owners should specify the purpose for which their data will be collected. In this paper, we apply such principles as well as reputation techniques to support purpose and trust in structured P2P systems. Hippocratic databases enforce purpose-based privacy while reputation techniques guarantee trust. We propose a P2P data privacy model which combines the Hippocratic principles and the trust notions. We also present the algorithms of PriServ, a DHT-based P2P privacy service which supports this model and prevents data privacy violation. We show, in a performance evaluation, that PriServ introduces a small overhead." ] }
1012.1131
1516924274
Nowadays we are faced with the increasing popularity of social software, including wikis, blogs, micro-blogs and online social networks such as Facebook and MySpace. Unfortunately, the most widely used social services are centralized and personal information is stored at a single vendor. This results in potential privacy problems, as users do not have much control over how their private data is disseminated. To overcome this limitation, some recent approaches envision replacing the centralization of services under a single authority with a peer-to-peer, trust-based approach where users can decide with whom they want to share their private data. In this peer-to-peer collaboration it is very difficult to ensure that, after data is shared with other peers, these peers will not misbehave and violate data privacy. In this paper we propose a mechanism that addresses the issue of data privacy violation due to data disclosure to malicious peers. In our approach, trust values between users are adjusted according to their previous activities on the shared data. Users share their private data by specifying obligations the receivers must follow. We log the modifications users make on the shared data as well as the obligations that must be followed when data is shared. Using a log-auditing mechanism we detect users who misbehaved and adjust their associated trust values by using any existing decentralized trust model.
Keeping and managing event logs is frequently used for ensuring security and privacy, and this approach has been studied in many works. In @cite_8 , a log-auditing approach is used for detecting misbehavior in collaborative work environments, where a small group of users shares a large number of documents and policies. In @cite_18 @cite_13 , a logical, policy-centric framework for behavior-based decision-making is presented. The framework consists of a formal model of the past behaviors of principals, based on event structures. However, these models require a central authority to audit the log in order to help the system make decisions, which limits their use in a fully decentralized environment.
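As a minimal illustration of the shared log-auditing idea (not the formalisms of the cited works), the snippet below scans an invented event log and flags users whose logged actions are not covered by the obligations attached to the shared document.

```python
# Hypothetical obligations: the actions a receiver may perform on each document.
obligations = {
    "doc-42": {"read", "annotate"},
    "doc-99": {"read"},
}

# Hypothetical event log collected while the data was shared.
event_log = [
    ("alice", "doc-42", "read"),
    ("bob",   "doc-42", "redistribute"),   # violates the obligations for doc-42
    ("carol", "doc-99", "read"),
    ("bob",   "doc-99", "modify"),         # violates the obligations for doc-99
]

def audit(log, obligations):
    """Return, per user, the logged actions that violate the document's obligations."""
    violations = {}
    for user, doc, action in log:
        if action not in obligations.get(doc, set()):
            violations.setdefault(user, []).append((doc, action))
    return violations

print(audit(event_log, obligations))   # misbehaving users can then have their trust lowered
```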
{ "cite_N": [ "@cite_18", "@cite_13", "@cite_8" ], "mid": [ "1499405354", "2096063762", "2145820326" ], "abstract": [ "Reputation systems are meta systems that record, aggregate and distribute information about principals' behaviour in distributed applications. Similarly, history-based access control systems make decisions based on programs' past security-sensitive actions. While the applications are distinct, the two types of systems are fundamentally making decisions based on information about the past behaviour of an entity. A logical policy-centric framework for such behaviour-based decision-making is presented. In the framework, principals specify policies which state precise requirements on the past behaviour of other principals that must be fulfilled in order for interaction to take place. The framework consists of a formal model of behaviour, based on event structures; a declarative logical language for specifying properties of past behaviour; and efficient dynamic algorithms for checking whether a particular behaviour satisfies a property from the language. It is shown how the framework can be extended in several ways, most notably to encompass parameterized events and quantification over parameters. In an extended application, it is illustrated how the framework can be applied for dynamic history-based access control for safe execution of unknown and untrusted programs.", "Abstract: Log auditing is a basic intrusion detection mechanism, whereby attacks are detected by uncovering matches of sequences of events against signatures. We argue that this is naturally expressed as a model-checking problem against linear Kripke models. A variant of the classic linear time temporal logic of Manna and Pnueli with first-order variables is first investigated in this framework. But this logic is in dire need of refinement, as far as expressiveness and efficiency are concerned. We therefore propose a second, less standard logic consisting of flat, Wolper-style linear-time formulae. We describe an efficient on-line algorithm, making the approach attractive for complex log auditing tasks. We also present a few optimizations that the use of a formal semantics affords us.", "In this paper we introduce a new framework for controlling compliance to discretionary access control policies [ in Proceedings of the International Workshop on Policies for Distributed Systems and Networks (POLICY), 2005; in Proceedings of the IFIP Workshop on Formal Aspects in Security and Trust (FAST), 2004]. The framework consists of a simple policy language, modeling ownership of data and administrative policies. Users can create documents, and authorize others to process the documents. To control compliance to the document policies, we define a formal audit procedure by which users may be audited and asked to justify that an action was in compliance with a policy. In this paper we focus on the implementation of our framework. We present a formal proof system, which was only informally described in earlier work. We derive an important tractability result (a cut-elimination theorem), and we use this result to implement a proof-finder, a key component in this framework. We argue that in a number of settings, such as collaborative work environments, where a small group of users create and manage document in a decentralized way, our framework is a more flexible approach for controlling the compliance to policies." ] }
1012.1131
1516924274
Nowadays we are faced with the increasing popularity of social software, including wikis, blogs, micro-blogs and online social networks such as Facebook and MySpace. Unfortunately, the most widely used social services are centralized and personal information is stored at a single vendor. This results in potential privacy problems, as users do not have much control over how their private data is disseminated. To overcome this limitation, some recent approaches envision replacing the centralization of services under a single authority with a peer-to-peer, trust-based approach where users can decide with whom they want to share their private data. In this peer-to-peer collaboration it is very difficult to ensure that, after data is shared with other peers, these peers will not misbehave and violate data privacy. In this paper we propose a mechanism that addresses the issue of data privacy violation due to data disclosure to malicious peers. In our approach, trust values between users are adjusted according to their previous activities on the shared data. Users share their private data by specifying obligations the receivers must follow. We log the modifications users make on the shared data as well as the obligations that must be followed when data is shared. Using a log-auditing mechanism we detect users who misbehaved and adjust their associated trust values by using any existing decentralized trust model.
Trust management is an important aspect of the solution we propose. The concept of trust varies across communities according to how it is computed and used. Our work relies on a concept of trust based on past encounters: "Trust is a subjective expectation an agent has about another's future behavior based on the history of their encounters" MuiHICSS02 . Various trust models for peer-to-peer systems exist, such as the NICE model @cite_12 , the EigenTrust model @cite_5 and the global trust model @cite_16 , and our mechanism for discovering misbehaving users can be coupled with any existing trust model in order to manage user trust values.
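For concreteness, the fragment below sketches the flavour of reputation aggregation in models such as EigenTrust @cite_5 : local trust scores are normalised row-wise and global values are obtained by power iteration. The local trust matrix is invented, and details such as pre-trusted peers and the distributed computation are omitted.

```python
import numpy as np

def global_trust(local_trust, iters=50):
    """EigenTrust-style aggregation: normalise local scores, then iterate t <- C^T t."""
    C = np.asarray(local_trust, dtype=float)
    row_sums = C.sum(axis=1, keepdims=True)
    row_sums[row_sums == 0] = 1.0               # avoid division by zero for silent peers
    C = C / row_sums
    t = np.full(C.shape[0], 1.0 / C.shape[0])   # start from uniform trust
    for _ in range(iters):
        t = C.T @ t
    return t

# Invented local trust scores: entry [i][j] is how much peer i trusts peer j.
local = [[0, 4, 1],
         [5, 0, 0],
         [2, 3, 0]]
print(global_trust(local))   # global trust values for the three peers
```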
{ "cite_N": [ "@cite_5", "@cite_16", "@cite_12" ], "mid": [ "2156523427", "2000368422", "2096334334" ], "abstract": [ "Peer-to-peer file-sharing networks are currently receiving much attention as a means of sharing and distributing information. However, as recent experience shows, the anonymous, open nature of these networks offers an almost ideal environment for the spread of self-replicating inauthentic files.We describe an algorithm to decrease the number of downloads of inauthentic files in a peer-to-peer file-sharing network that assigns each peer a unique global trust value, based on the peer's history of uploads. We present a distributed and secure method to compute global trust values, based on Power iteration. By having peers use these global trust values to choose the peers from whom they download, the network effectively identifies malicious peers and isolates them from the network.In simulations, this reputation system, called EigenTrust, has been shown to significantly decrease the number of inauthentic files on the network, even under a variety of conditions where malicious peers cooperate in an attempt to deliberately subvert the system.", "Managing trust is a problem of particular importance in peer-to-peer environments where one frequently encounters unknown agents. Existing methods for trust management, that are based on reputation, focus on the semantic properties of the trust model. They do not scale as they either rely on a central database or require to maintain global knowledge at each agent to provide data on earlier interactions. In this paper we present an approach that addresses the problem of reputation-based trust management at both the data management and the semantic level. We employ at both levels scalable data structures and algorithms that require no central control and allow to assess trust by computing an agents reputation from its former interactions with other agents. Thus the meethod can be implemented in a peer-to-peer environment and scales well for very large numbers of participants. We expect that scalable methods for trust management are an important factor, if fully decentralized peer-to-peer systems should become the platform for more serious applications than simple file exchange.", "We present a distributed scheme for trust inference in peer-to-peer networks. Our work is in the context of the NICE system, which is a platform for implementing cooperative applications over the Internet. We describe a technique for efficiently storing user reputation information in a completely decentralized manner, and show how this information can be used to efficiently identify non-cooperative users in NICE. We present a simulation-based study of our algorithms, in which we show our scheme scales to thousands of users using modest amounts of storage, processing, and bandwidth at any individual node. Lastly we show that our scheme is robust and can form cooperative groups in systems where the vast majority of users are malicious." ] }
1012.1367
2949692215
Online prediction methods are typically presented as serial algorithms running on a single processor. However, in the age of web-scale prediction problems, it is increasingly common to encounter situations where a single processor cannot keep up with the high rate at which inputs arrive. In this work, we present the algorithm, a method of converting many serial gradient-based online prediction algorithms into distributed algorithms. We prove a regret bound for this method that is asymptotically optimal for smooth convex loss functions and stochastic inputs. Moreover, our analysis explicitly takes into account communication latencies between nodes in the distributed environment. We show how our method can be used to solve the closely-related distributed stochastic optimization problem, achieving an asymptotically linear speed-up over multiple processors. Finally, we demonstrate the merits of our approach on a web-scale online prediction problem.
@cite_5 address the distributed online learning problem with a motivation similar to ours: addressing the scalability problem of online learning algorithms, which are inherently sequential. The main observation @cite_5 make is that in many cases computing the gradient takes much longer than computing the update prescribed by the online prediction algorithm. They therefore present a pipelined computational model in which each worker alternates between computing a gradient and applying the update rule, and the workers are synchronized so that no two of them perform an update simultaneously.
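A minimal serial simulation of that pipeline, shown below, reduces it to SGD with a fixed update delay: the gradient applied at step t was computed several steps earlier on an older predictor. The least-squares data, learning rate and delay are made-up, and this is a sketch of the idea rather than the authors' implementation.

```python
import numpy as np

def delayed_sgd(grad_fn, data, dim, delay, lr=0.05):
    """SGD where each applied gradient was computed `delay` steps earlier,
    mimicking workers that overlap gradient computation with updates."""
    w = np.zeros(dim)
    pending = []                          # gradients in flight, oldest first
    for z in data:
        pending.append(grad_fn(w, z))     # start computing a gradient on the current w
        if len(pending) > delay:
            w = w - lr * pending.pop(0)   # apply the gradient that just finished
    for g in pending:                     # drain the pipeline at the end
        w = w - lr * g
    return w

# Toy problem: least squares against noisy observations of a fixed target vector.
rng = np.random.default_rng(0)
target = np.array([1.0, -2.0, 0.5])
samples = []
for _ in range(2000):
    x = rng.normal(size=3)
    samples.append((x, x @ target + 0.01 * rng.normal()))
grad = lambda w, z: (w @ z[0] - z[1]) * z[0]    # gradient of 0.5 * (w.x - y)^2
print(delayed_sgd(grad, samples, dim=3, delay=4))
```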
{ "cite_N": [ "@cite_5" ], "mid": [ "2133233009" ], "abstract": [ "Online learning algorithms have impressive convergence properties when it comes to risk minimization and convex games on very large problems. However, they are inherently sequential in their design which prevents them from taking advantage of modern multi-core architectures. In this paper we prove that online learning with delayed updates converges well, thereby facilitating parallel online learning." ] }
1012.1367
2949692215
Online prediction methods are typically presented as serial algorithms running on a single processor. However, in the age of web-scale prediction problems, it is increasingly common to encounter situations where a single processor cannot keep up with the high rate at which inputs arrive. In this work, we present the algorithm, a method of converting many serial gradient-based online prediction algorithms into distributed algorithms. We prove a regret bound for this method that is asymptotically optimal for smooth convex loss functions and stochastic inputs. Moreover, our analysis explicitly takes into account communication latencies between nodes in the distributed environment. We show how our method can be used to solve the closely-related distributed stochastic optimization problem, achieving an asymptotically linear speed-up over multiple processors. Finally, we demonstrate the merits of our approach on a web-scale online prediction problem.
Similar to the results presented in this paper, @cite_5 attempted to show that it is possible to achieve a cumulative regret of @math with @math parallel workers, compared to the @math of the naive solution. However, their work suffers from a few limitations. First, their proofs hold only for unconstrained convex optimization, where no projection is needed. Second, since they work in a model where one node at a time updates a shared predictor while the other nodes compute gradients, the scalability of their proposed method is limited by the ratio of the time it takes to compute a gradient to the time it takes to run the update rule of the serial online learning algorithm.
{ "cite_N": [ "@cite_5" ], "mid": [ "2133233009" ], "abstract": [ "Online learning algorithms have impressive convergence properties when it comes to risk minimization and convex games on very large problems. However, they are inherently sequential in their design which prevents them from taking advantage of modern multi-core architectures. In this paper we prove that online learning with delayed updates converges well, thereby facilitating parallel online learning." ] }
1012.1367
2949692215
Online prediction methods are typically presented as serial algorithms running on a single processor. However, in the age of web-scale prediction problems, it is increasingly common to encounter situations where a single processor cannot keep up with the high rate at which inputs arrive. In this work, we present the algorithm, a method of converting many serial gradient-based online prediction algorithms into distributed algorithms. We prove a regret bound for this method that is asymptotically optimal for smooth convex loss functions and stochastic inputs. Moreover, our analysis explicitly takes into account communication latencies between nodes in the distributed environment. We show how our method can be used to solve the closely-related distributed stochastic optimization problem, achieving an asymptotically linear speed-up over multiple processors. Finally, we demonstrate the merits of our approach on a web-scale online prediction problem.
In another related work, @cite_1 present a distributed dual averaging method for optimization over networks. They assume the loss functions are Lipschitz continuous, but their gradients may not be. Their method does not need synchronization to average gradients computed at the same point. Instead, they employ a distributed consensus algorithm on all the gradients generated by different processors at different points. When applied to the stochastic online prediction setting, even for the most favorable class of communication graphs, with constant spectral gaps (e.g., expander graphs), their best regret bound is @math . This bound is no better than one would get by running @math parallel machines without communication (see ).
{ "cite_N": [ "@cite_1" ], "mid": [ "2120293976" ], "abstract": [ "The goal of decentralized optimization over a network is to optimize a global objective formed by a sum of local (possibly nonsmooth) convex functions using only local computation and communication. We develop and analyze distributed algorithms based on dual averaging of subgradients, and provide sharp bounds on their convergence rates as a function of the network size and topology. Our analysis clearly separates the convergence of the optimization algorithm itself from the effects of communication constraints arising from the network structure. We show that the number of iterations required by our algorithm scales inversely in the spectral gap of the network. The sharpness of this prediction is confirmed both by theoretical lower bounds and simulations for various networks." ] }
1012.1367
2949692215
Online prediction methods are typically presented as serial algorithms running on a single processor. However, in the age of web-scale prediction problems, it is increasingly common to encounter situations where a single processor cannot keep up with the high rate at which inputs arrive. In this work, we present the algorithm, a method of converting many serial gradient-based online prediction algorithms into distributed algorithms. We prove a regret bound for this method that is asymptotically optimal for smooth convex loss functions and stochastic inputs. Moreover, our analysis explicitly takes into account communication latencies between nodes in the distributed environment. We show how our method can be used to solve the closely-related distributed stochastic optimization problem, achieving an asymptotically linear speed-up over multiple processors. Finally, we demonstrate the merits of our approach on a web-scale online prediction problem.
In another recent work, @cite_8 study a method in which each node in the network runs the classic stochastic gradient method on a random subset of the overall data set, and the nodes only aggregate their solutions at the end (by averaging their final weight vectors). In terms of online regret, this is obviously the same as running @math machines independently without communication, so a more suitable measure is the optimality gap (defined in ) of the final averaged predictor. Even with respect to this measure, their expected optimality gap does not show an advantage over running @math machines independently. A similar approach was also considered by @cite_6 , and an experimental study of such a method was reported in @cite_0 .
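For comparison, the fragment below simulates that aggregate-only-at-the-end strategy on a toy least-squares problem: k independent SGD runs on disjoint random shards, with only the final weight vectors averaged. All data and parameters are invented.

```python
import numpy as np

rng = np.random.default_rng(1)
target = np.array([2.0, -1.0, 0.0, 3.0])

def make_samples(n):
    X = rng.normal(size=(n, target.size))
    y = X @ target + 0.01 * rng.normal(size=n)
    return X, y

def sgd(X, y, lr=0.05):
    w = np.zeros(target.size)
    for x_i, y_i in zip(X, y):
        w -= lr * (w @ x_i - y_i) * x_i   # gradient of 0.5 * (w.x - y)^2
    return w

# k workers each run plain SGD on their own shard without communicating;
# only the final predictors are averaged.
k = 4
shards = [make_samples(500) for _ in range(k)]
final_w = np.mean([sgd(X, y) for X, y in shards], axis=0)
print(final_w)
```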
{ "cite_N": [ "@cite_0", "@cite_6", "@cite_8" ], "mid": [ "1556849888", "2024484010", "2166706236" ], "abstract": [ "We present a new and simple algorithm for learning large margin classifiers that works in a truly online manner. The algorithm generates a linear classifier by averaging the weights associated with several perceptron-like algorithms run in parallel in order to approximate the Bayes point. A random subsample of the incoming data stream is used to ensure diversity in the perceptron solutions. We experimentally study the algorithm's performance on online and batch learning settings. The online experiments showed that our algorithm produces a low prediction error on the training sequence and tracks the presence of concept drift. On the batch problems its performance is comparable to the maximum margin algorithm which explicitly maximises the margin.", "This paper considers an important class of convex programming (CP) problems, namely, the stochastic composite optimization (SCO), whose objective function is given by the summation of general nonsmooth and smooth stochastic components. Since SCO covers non-smooth, smooth and stochastic CP as certain special cases, a valid lower bound on the rate of convergence for solving these problems is known from the classic complexity theory of convex programming. Note however that the optimization algorithms that can achieve this lower bound had never been developed. In this paper, we show that the simple mirror-descent stochastic approximation method exhibits the best-known rate of convergence for solving these problems. Our major contribution is to introduce the accelerated stochastic approximation (AC-SA) algorithm based on Nesterov’s optimal method for smooth CP (Nesterov in Doklady AN SSSR 269:543–547, 1983; Nesterov in Math Program 103:127–152, 2005), and show that the AC-SA algorithm can achieve the aforementioned lower bound on the rate of convergence for SCO. To the best of our knowledge, it is also the first universally optimal algorithm in the literature for solving non-smooth, smooth and stochastic CP problems. We illustrate the significant advantages of the AC-SA algorithm over existing methods in the context of solving a special but broad class of stochastic programming problems.", "With the increase in available data parallel machine learning has become an increasingly pressing problem. In this paper we present the first parallel stochastic gradient descent algorithm including a detailed analysis and experimental evidence. Unlike prior work on parallel optimization algorithms [5, 7] our variant comes with parallel acceleration guarantees and it poses no overly tight latency constraints, which might only be available in the multicore setting. Our analysis introduces a novel proof technique — contractive mappings to quantify the speed of convergence of parameter distributions to their asymptotic limits. As a side effect this answers the question of how quickly stochastic gradient descent algorithms reach the asymptotically normal regime [1, 8]." ] }
1012.1623
2950904175
This paper presents a creativity support tool, called FreePub, to collect and organize scientific material using mindmaps. Mindmaps are visual, graph-based representations of concepts, ideas, notes, tasks, etc. They generally take a hierarchical or tree-branch format, with ideas branching into their subsections. FreePub supports creativity cycles. A user starts such a cycle by setting up her domain of interest using mindmaps. Then, she can browse mindmaps and launch search tasks to gather relevant publications from several data sources. Besides publications, FreePub identifies helpful supporting material (e.g., blog posts, presentations). All information retrieved from FreePub can be imported and organized in mindmaps. FreePub has been fully implemented on top of FreeMind, a popular open-source mindmapping tool.
The use of mindmaps in information retrieval tasks has been acknowledged by several researchers. In @cite_19 , the authors present how information retrieval on mind maps could be used to enhance expert search, document summarization, keyword-based search engines, document recommender systems, and determining word relatedness.
{ "cite_N": [ "@cite_19" ], "mid": [ "2064648698" ], "abstract": [ "Mind maps are used by millions of people. In this paper we present how information retrieval on mind maps could be used to enhance expert search, document summarization, keyword based search engines, document recommender systems and determining word relatedness. For instance, words in a mind map could be used for creating a skill profile of the mind maps' author and hence enhance expert search. This paper is a research-in-progress paper which means no research results are presented but only ideas." ] }
1012.1641
1563217556
Multicore parallel programming has some very difficult problems, such as deadlocks during synchronization and race conditions introduced by concurrency. Added to the difficulty is the lack of a simple, well-accepted computing model for multicore architectures; because of that, it is hard to develop powerful programming environments and debugging tools. To tackle these challenges, we promote a generalized stream computing model, inspired by previous research on stream computing, that unifies parallelization strategies for programming language design, compiler design and operating system design. Our model provides a high-level abstraction for designing language constructs that convey concepts of concurrent operations, for organizing a program's runtime layout for parallel execution, and for scheduling concurrent instruction blocks through the runtime and/or operating systems. In this paper, we give a high-level description of the proposed model: we define the foundation of the model, show its simplicity through algebraic computational operation analysis, illustrate a programming framework enabled by the model, and demonstrate its potential through powerful design options for programming languages, compilers and operating systems.
A school of analytical models, represented most recently by Multi-BSP @cite_22 , provides theoretical abstractions that bridge parallel algorithm design with multicore architectures. One common trait of such models is their emphasis on portable performance. However, this motivation alone may not ensure engineering success, as there are more immediate concerns, such as indeterminacy, synchronization overhead, and scalability, that highlight the weaknesses of current parallel programming tools. In the Multi-BSP case, the model is useful for analyzing a given parallel computing system on some multicore architectures, but it does not provide enough engineering guidance for designing such a system. It is not surprising that the Multi-BSP model's hierarchically nested component structures resemble the stream entity hypergraphs in our model, but our stream entities have richer engineering implications.
{ "cite_N": [ "@cite_22" ], "mid": [ "1991300218" ], "abstract": [ "Writing software for one parallel system is a feasible though arduous task. Reusing the substantial intellectual effort so expended for programming a second system has proved much more challenging. In sequential computing algorithms textbooks and portable software are resources that enable software systems to be written that are efficiently portable across changing hardware platforms. These resources are currently lacking in the area of multi-core architectures, where a programmer seeking high performance has no comparable opportunity to build on the intellectual efforts of others. In order to address this problem we propose a bridging model aimed at capturing the most basic resource parameters of multi-core architectures. We suggest that the considerable intellectual effort needed for designing efficient algorithms for such architectures may be most fruitfully expended in designing portable algorithms, once and for all, for such a bridging model. Portable algorithms would contain efficient designs for all reasonable combinations of the basic resource parameters and input sizes, and would form the basis for implementation or compilation for particular machines. Our Multi-BSP model is a multi-level model that has explicit parameters for processor numbers, memory cache sizes, communication costs, and synchronization costs. The lowest level corresponds to shared memory or the PRAM, acknowledging the relevance of that model for whatever limitations on memory and processor numbers it may be efficacious to emulate it. We propose parameter-aware portable algorithms that run efficiently on all relevant architectures with any number of levels and any combination of parameters. For these algorithms we define a parameter-free notion of optimality. We show that for several fundamental problems, including standard matrix multiplication, the Fast Fourier Transform, and comparison sorting, there exist optimal portable algorithms in that sense, for all combinations of machine parameters. Thus some algorithmic generality and elegance can be found in this many parameter setting." ] }
1012.1641
1563217556
Multicore parallel programming has some very difficult problems, such as deadlocks during synchronization and race conditions introduced by concurrency. Added to the difficulty is the lack of a simple, well-accepted computing model for multicore architectures; because of that, it is hard to develop powerful programming environments and debugging tools. To tackle these challenges, we promote a generalized stream computing model, inspired by previous research on stream computing, that unifies parallelization strategies for programming language design, compiler design and operating system design. Our model provides a high-level abstraction for designing language constructs that convey concepts of concurrent operations, for organizing a program's runtime layout for parallel execution, and for scheduling concurrent instruction blocks through the runtime and/or operating systems. In this paper, we give a high-level description of the proposed model: we define the foundation of the model, show its simplicity through algebraic computational operation analysis, illustrate a programming framework enabled by the model, and demonstrate its potential through powerful design options for programming languages, compilers and operating systems.
Jade @cite_12 represents an important experiment in parallel programming language design. Jade preserves the serial semantics of a program, implicitly exploits task parallelism, and moves data closer to processors. However, although it was designed for a smooth transition from sequential to parallel programming, Jade's dependence on its type system and explicit object-access control still exposes low-level synchronization to programmers. While such low-level controls arguably provide programming flexibility, not all of these details are essential to algorithm design. On the other hand, Jade arguably lacks the power to define concurrent execution structures at a larger program scope.
{ "cite_N": [ "@cite_12" ], "mid": [ "2028267160" ], "abstract": [ "Jade is a portable, implicitly parallel language designed for exploiting task-level concurrency.Jade programmers start with a program written in a standard serial, imperative language, then use Jade constructs to declare how parts of the program access data. The Jade implementation uses this data access information to automatically extract the concurrency and map the application onto the machine at hand. The resulting parallel execution preserves the semantics of the original serial program. We have implemented Jade as an extension to C, and Jade implementations exist for s hared-memory multiprocessors, homogeneous message-passing machines, and heterogeneous networks of workstations. In this atricle we discuss the design goals and decisions that determined the final form of Jade and present an overview of the Jade implementation. We also present our experience using Jade to implement several complete scientific and engineering applications. We use this experience to evaluate how the different Jade language features were used in practice and how well Jade as a whole supports the process of developing parallel applications. We find that the basic idea of preserving the serial semantics simplifies the program development process, and that the concept of using data access specifications to guide the parallelization offers significant advantages over more traditional control-based approaches. We also find that the Jade data model can interact poorly with concurrency patterns that write disjoint pieces of a single aggregate data structure, although this problem arises in only one of the applications." ] }
1012.1641
1563217556
Multicore parallel programming has some very difficult problems, such as deadlocks during synchronization and race conditions introduced by concurrency. Added to the difficulty is the lack of a simple, well-accepted computing model for multicore architectures; because of that, it is hard to develop powerful programming environments and debugging tools. To tackle these challenges, we promote a generalized stream computing model, inspired by previous research on stream computing, that unifies parallelization strategies for programming language design, compiler design and operating system design. Our model provides a high-level abstraction for designing language constructs that convey concepts of concurrent operations, for organizing a program's runtime layout for parallel execution, and for scheduling concurrent instruction blocks through the runtime and/or operating systems. In this paper, we give a high-level description of the proposed model: we define the foundation of the model, show its simplicity through algebraic computational operation analysis, illustrate a programming framework enabled by the model, and demonstrate its potential through powerful design options for programming languages, compilers and operating systems.
Bläser @cite_24 reported a component-based language and operating system that supports concurrent and structured computing. In that system, a component encapsulates data and computation, which are defined as services produced by one component and consumed by another. The emphasis on relations among components is like ours, except that communication among related components uses explicit message passing.
{ "cite_N": [ "@cite_24" ], "mid": [ "1996884724" ], "abstract": [ "With the advent of multi-processor machines, the time has definitively come to use new programming models that offer an improved support of concurrency. While various interesting new models have been recently presented for concurrent and structured programming, no appropriate runtime systems currently exists. Therefore, we have developed our own new operating system which has been particularly optimized for high-performance execution of such programs." ] }
1012.1641
1563217556
Multicore parallel programming has some very difficult problems, such as deadlocks during synchronization and race conditions introduced by concurrency. Added to the difficulty is the lack of a simple, well-accepted computing model for multicore architectures; because of that, it is hard to develop powerful programming environments and debugging tools. To tackle these challenges, we promote a generalized stream computing model, inspired by previous research on stream computing, that unifies parallelization strategies for programming language design, compiler design and operating system design. Our model provides a high-level abstraction for designing language constructs that convey concepts of concurrent operations, for organizing a program's runtime layout for parallel execution, and for scheduling concurrent instruction blocks through the runtime and/or operating systems. In this paper, we give a high-level description of the proposed model: we define the foundation of the model, show its simplicity through algebraic computational operation analysis, illustrate a programming framework enabled by the model, and demonstrate its potential through powerful design options for programming languages, compilers and operating systems.
Finally, the GeneSC model targets concurrent computing at the instruction-block level. Hardware-dependent considerations, such as support for manycore and heterogeneous multicore architectures, will rely on implementations, particularly at the runtime and operating-system levels, such as those in @cite_50 @cite_59 @cite_39 .
{ "cite_N": [ "@cite_59", "@cite_50", "@cite_39" ], "mid": [ "64217489", "", "2157733805" ], "abstract": [ "We argue for space-time partitioning (STP) in manycore operating systems. STP divides resources such as cores, cache, and network bandwidth amongst interacting software components. Components are given unrestricted access to their resources and may schedule them in an application-specific fashion, which is critical for good parallel application performance. Components communicate via messages, which are strictly controlled to enhance correctness and security. We discuss properties of STP and ways in which hardware can assist STP. We introduce Tessellation, a new operating system built on top of STP, which restructures a traditional operating system as a set of distributed interacting services. In Tessellation, parallel applications can efficiently coexist and interact with one another.", "", "Helios is an operating system designed to simplify the task of writing, deploying, and tuning applications for heterogeneous platforms. Helios introduces satellite kernels, which export a single, uniform set of OS abstractions across CPUs of disparate architectures and performance characteristics. Access to I O services such as file systems are made transparent via remote message passing, which extends a standard microkernel message-passing abstraction to a satellite kernel infrastructure. Helios retargets applications to available ISAs by compiling from an intermediate language. To simplify deploying and tuning application performance, Helios exposes an affinity metric to developers. Affinity provides a hint to the operating system about whether a process would benefit from executing on the same platform as a service it depends upon. We developed satellite kernels for an XScale programmable I O card and for cache-coherent NUMA architectures. We offloaded several applications and operating system components, often by changing only a single line of metadata. We show up to a 28 performance improvement by offloading tasks to the XScale I O card. On a mail-server benchmark, we show a 39 improvement in performance by automatically splitting the application among multiple NUMA domains." ] }
1012.0027
2762633338
In this article we study the multicast routing problem in all-optical WDM networks under the sparse light splitting constraint. To implement a multicast session, several light-trees may have to be used due to the limited fanouts of network nodes. Although many multicast routing algorithms have been proposed in order to reduce the total number of wavelength channels used (total cost) for a multicast session, the maximum number of wavelengths required in one fiber link (link stress) and the end-to-end delay are two parameters which are not always taken into consideration. It is known that the shortest path tree (SPT) results in the optimal end-to-end delay, but it cannot be employed directly for multicast routing in sparse light splitting WDM networks. Hence, we propose a novel wavelength routing algorithm which tries to avoid the multicast incapable branching nodes (MIBs, branching nodes without splitting capability) in the shortest-path-based multicast tree to diminish the link stress. Good parts of the shortest-path tree are retained by the algorithm to reduce the end-to-end delay. The algorithm consists of three steps: (1) a DijkstraPro algorithm with priority assignment and node adoption is introduced to produce a SPT with up to 38 fewer MIB nodes in the NSF topology and 46 fewer MIB nodes in the USA Longhaul topology, (2) critical articulation and deepest branch heuristics are used to process the MIB nodes, (3) a distance-based light-tree reconnection algorithm is proposed to create the multicast light-trees. Extensive simulations demonstrate the algorithm's efficiency in terms of link stress and end-to-end delay.
The difficulty of multicast routing in WDM networks with sparse light splitting has been addressed in many papers @cite_0 @cite_7 @cite_14 @cite_8 @cite_12 @cite_13 @cite_16 @cite_4 and various algorithms have been proposed. They fall broadly into three main categories according to the routing approach they employ: shortest-path-based schemes (e.g., Reroute-to-Source and Reroute-to-Any @cite_0 ), member-based schemes (e.g., Member-Only @cite_0 and Virtual-Source Capacity-Priority @cite_7 ) and core-based schemes (e.g., Virtual Source-based @cite_14 @cite_8 ). Essentially, the shortest-path-based approach constructs the multicast tree by connecting the source to each destination individually using the appropriate shortest path in order to minimize the per-source-receiver path cost. The objective of the member-based schemes, however, is to minimize the overall cost of the multicast light-trees. The core-based approach connects a subset of nodes, called core nodes, which have both light-splitting and wavelength-conversion capacities. The multicast session is then established with the help of this core structure @cite_14 @cite_8 . To the best of our knowledge from the literature @cite_0 , in WDM networks with sparse splitting and without wavelength conversion the Member-Only algorithm yields approximately the minimal cost and the best link stress, while the Reroute-to-Source algorithm yields the optimal delay.
{ "cite_N": [ "@cite_14", "@cite_4", "@cite_7", "@cite_8", "@cite_0", "@cite_16", "@cite_13", "@cite_12" ], "mid": [ "2164236785", "2125385891", "223060195", "1976097620", "2123073286", "2117186966", "2016053525", "1526298436" ], "abstract": [ "Wavelength-division multiplexed (WDM) networks using wavelength-routing are considered to be potential candidates for the next generation wide-area backbone networks. This paper concerns with the problem of multicast routing in WDM networks. A node in the network may have both wavelength conversion and splitting capabilities which is called as virtual source. It is assumed that virtual sources are limited in number and are distributed evenly in the network. This paper proposes a new approach, which makes use of these special capable virtual sources to construct a multicast tree.", "As we know, the member-only algorithm in provides the best links stress and wavelength usage for the construction of multicast light-trees in WDM networks with sparse splitting. However, the diameter of tree is too big and the average delay is also too large, which are intolerant for QoS required multimedia applications. In this paper, a distance priority based algorithm is proposed to build light-trees for multicast routing, where the Candidate Destinations and the Candidate Connectors are introduced. Simulations show the proposed algorithm is able to greatly reduce the diameter and average delay of the multicast tree (up to 51 and 50 respectively), while keep the same or get a slightly better link stress as well as the wavelength usage than the famous Member-Only algorithm.", "", "This paper considers multicasting on wavelength-routing mesh optical networks. Although multicasting has been studied extensively in different network environments, multicasting in this environment is different, and more involved. The paper discusses the challenges of multicast support in optical wavelength routing networks, and reports on the advances made so far in this venue. The paper introduces a classification and a comparison of such techniques, and a study of their advantages and disadvantages.", "As wavelength division multiplexing (WDM) technology matures and multicast applications become increasingly popular, supporting multicast at the WDM layer becomes an important and yet challenging topic. In this paper, we study constrained multicast routing in WDM networks with sparse light splitting, i.e., where some switches are incapable of splitting light (of copying data in the optical domain) due to evolutional and or economical reasons. Specifically, we propose four WDM multicast routing algorithms, namely, re-route-to-source, re-route-to-any, member-first, and member-only. Given the network topology, multicast membership information, and light splitting capability of the switches, these algorithms construct a source-based multicast \"light-forest\" (consisting one or more multicast trees) for each multicast session. While the first two algorithms can build on a multicast tree constructed by IP (which does not take into consideration the splitting capability of the WDM switches), the last two algorithms attempt to address the joint problem of optimal multicast routing and sparse splitting in WDM networks. 
The performance of these algorithms are compared in terms of the average number of wavelengths used per forest (or multicast session), average number of branches involved (bandwidth) per forest as well as average number of hops encountered (delay) from a multicast source to a multicast member. The results obtained from this research should present new and exciting opportunities for further theoretical as well as experimental work.", "Although many multicast routing algorithms have been proposed in order to reduce the total cost in WDM optical networks, the link stress and delay are two parameters which are not always taken into consideration. This paper proposes a novel wavelength routing algorithm, which tries to avoid the multicast incapable branching nodes (MIB, branching nodes without splitting capability) to diminish the link stress for the shortest path based multicast tree and maintains good parts of the shortest path tree to reduce the end-to-end delay. Firstly a DijkstraPro algorithm with priority assignment and node adoption is introduced to produce a shortest path tree with up to 38 fewer MIB nodes, and then critical articulation and deepest branch heuristics are used to process the MIB nodes. Finally distance based reconnection algorithm is proposed to create the multicast tree or forest.", "In this paper, we investigate the problem of Multicast Routing in Sparse Splitting Networks (MR-SSN). Given a network topology with the multicast capable nodes distributed uniformly throughout the network, and a multicast session, the MR-SSN problem is to find a route from the source node of the multicast session to all destinations of the multicast session such that the total number of fibers used in establishing the session is minimized. In this paper, we develop a rerouting algorithm for a given Steiner tree, which makes it feasible to route a multicast session using a tree-based solution in sparse light splitting optical networks. In addition, we present a heuristic based on Tabu Search (TS) that requires only one transmitter for the source node and one wavelength for each multicast session. To evaluate the performance of heuristics, we formulate the MR-SSP problem as an integer linear program (ILP), and optimally solve small instances using the commercially available linear solver, CPLEX. We test our heuristic on a wide range of network topologies. Our experimental results show that: (1) The difference between our solution and ILP optimal solution, in terms of the number of fibers used for establishing a multicast session, is within 10 in almost all the instances and within 5 in about half of the instances. (2) The average delay, taken over all destination nodes, falls within three times the optimal value. (3) A sparse light splitting all-optical network with 30 of multicast capable cross-connects has an acceptable low cost and relatively good performance. (4) The improvement achieved by TS heuristic increases considerably when the session size is large, the number of Splitter-and-Delivery cross-connects is small, and the network connectivity is dense.", "Communication systems with all-optical multicasting have better performance than those using optical electrical optical conversion. Multicast protocols assume that all nodes in the network can forward the signal from one input to several outputs. Since fabrication of an optical switch with splitting capability is an expensive technology, there are few switches that are multicast capable. 
The heuristics designed for all-optical networks have to handle this limitation. We introduce a shortest path based forest algorithm for all-optical networks. We propose a post-processing algorithm to reduce the number of wavelengths needed. The effect of this post-processing algorithm is also examined in a well-known approach." ] }
1011.6397
1661411472
The Johnson-Lindenstrauss lemma is a fundamental result in probability with several applications in the design and analysis of algorithms in high dimensional geometry. Most known constructions of linear embeddings that satisfy the Johnson-Lindenstrauss property involve randomness. We address the question of explicitly constructing such embedding families and provide a construction with an almost optimal use of randomness: we use O(log(n/delta) log(log(n/delta)/epsilon)) random bits for embedding n dimensions to O(log(1/delta)/epsilon^2) dimensions with error probability at most delta, and distortion at most epsilon. In particular, for delta = 1/poly(n) and fixed epsilon, we use O(log n loglog n) random bits. Previous constructions required at least O(log^2 n) random bits to get polynomially small error.
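For concreteness, the object whose randomness the paper reduces can be sketched as follows: the classical randomized construction draws a k x n matrix of independent signs scaled by 1/sqrt(k) and applies it to the data. The Python sketch below is a minimal illustration of that fully random baseline, not the low-randomness family of the paper; the dimensions are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 1000, 200                      # original and target dimensions (illustrative)
A = rng.choice([-1.0, 1.0], size=(k, n)) / np.sqrt(k)   # scaled Bernoulli / sign matrix

# Check how well norms (and hence pairwise distances) are preserved.
X = rng.standard_normal((20, n))
ratios = np.linalg.norm(X @ A.T, axis=1) / np.linalg.norm(X, axis=1)
print("norm ratios min/max:", ratios.min(), ratios.max())   # typically within 1 +/- O(1/sqrt(k))
```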
Independently, Kane and Nelson @cite_12 obtained a construction that is similar in spirit to ours and achieves a slightly better seed-length of @math . Note that for the important case of @math polynomially small, our seed-length is the same as theirs.
{ "cite_N": [ "@cite_12" ], "mid": [ "2114773674" ], "abstract": [ "Recent work of [Dasgupta-Kumar-Sarlos, STOC 2010] gave a sparse Johnson-Lindenstrauss transform and left as a main open question whether their construction could be efficiently derandomized. We answer their question affirmatively by giving an alternative proof of their result requiring only bounded independence hash functions. Furthermore, the sparsity bound obtained in our proof is improved. The main ingredient in our proof is a spectral moment bound for quadratic forms that was recently used in [Diakonikolas-Kane-Nelson, FOCS 2010]." ] }
1011.6397
1661411472
The Johnson-Lindenstrauss lemma is a fundamental result in probability with several applications in the design and analysis of algorithms in high dimensional geometry. Most known constructions of linear embeddings that satisfy the Johnson-Lindenstrauss property involve randomness. We address the question of explicitly constructing such embedding families and provide a construction with an almost optimal use of randomness: we use O(log(n/delta) log(log(n/delta)/epsilon)) random bits for embedding n dimensions to O(log(1/delta)/epsilon^2) dimensions with error probability at most delta, and distortion at most epsilon. In particular, for delta = 1/poly(n) and fixed epsilon, we use O(log n loglog n) random bits. Previous constructions required at least O(log^2 n) random bits to get polynomially small error.
The works of @cite_20 and Meka and Zuckerman @cite_13 construct pseudorandom generators for degree @math threshold functions achieving a seed-length of @math for fooling with error at most @math . As derandomizing the JL lemma is a special case of fooling degree 2 PTFs, these works give a JL family with seed-length @math .
{ "cite_N": [ "@cite_13", "@cite_20" ], "mid": [ "2022644058", "2138092283" ], "abstract": [ "We study the natural question of constructing pseudorandom generators (PRGs) for low-degree polynomial threshold functions (PTFs). We give a PRG with seed-length log n eO(d) fooling degree d PTFs with error at most e. Previously, no nontrivial constructions were known even for quadratic threshold functions and constant error e. For the class of degree 1 threshold functions or halfspaces, we construct PRGs with much better dependence on the error parameter e and obtain the following results. A PRG with seed length O(log n log(1 e)) for error e ≥ 1 poly(n). A PRG with seed length O(log n) for e ≥ 1 poly(log n). Previously, only PRGs with seed length O(log n log2(1 e) e2) were known for halfspaces. We also obtain PRGs with similar seed lengths for fooling halfspaces over the @math dimensional unit sphere. The main theme of our constructions and analysis is the use of invariance principles to construct pseudorandom generators. We also introduce the notion of monotone read-once branching programs, which is key to improving the dependence on the error rate e for halfspaces. These techniques may be of independent interest.", "For an @math -variate degree– @math real polynomial @math , we prove that @math is determined up to an additive @math as long as @math is a @math -wise independent distribution over @math for @math . This gives a broad class of explicit pseudorandom generators against degree- @math boolean threshold functions, and answers an open question of (FOCS 2009)." ] }
1011.6397
1661411472
The Johnson-Lindenstrauss lemma is a fundamental result in probability with several applications in the design and analysis of algorithms in high dimensional geometry. Most known constructions of linear embeddings that satisfy the Johnson-Lindenstrauss property involve randomness. We address the question of explicitly constructing such embedding families and provide a construction with an almost optimal use of randomness: we use O(log(n/delta) log(log(n/delta)/epsilon)) random bits for embedding n dimensions to O(log(1/delta)/epsilon^2) dimensions with error probability at most delta, and distortion at most epsilon. In particular, for delta = 1/poly(n) and fixed epsilon, we use O(log n loglog n) random bits. Previous constructions required at least O(log^2 n) random bits to get polynomially small error.
The best known explicit JL family is the construction of Clarkson and Woodruff @cite_16 who show that a random scaled Bernoulli matrix with @math -wise independent entries satisfies the JL lemma. We make use of their result in our construction.
{ "cite_N": [ "@cite_16" ], "mid": [ "2059867647" ], "abstract": [ "We give near-optimal space bounds in the streaming model for linear algebra problems that include estimation of matrix products, linear regression, low-rank approximation, and approximation of matrix rank. In the streaming model, sketches of input matrices are maintained under updates of matrix entries; we prove results for turnstile updates, given in an arbitrary order. We give the first lower bounds known for the space needed by the sketches, for a given estimation error e. We sharpen prior upper bounds, with respect to combinations of space, failure probability, and number of passes. The sketch we use for matrix A is simply STA, where S is a sign matrix. Our results include the following upper and lower bounds on the bits of space needed for 1-pass algorithms. Here A is an n x d matrix, B is an n x d' matrix, and c := d+d'. These results are given for fixed failure probability; for failure probability δ>0, the upper bounds require a factor of log(1 δ) more space. We assume the inputs have integer entries specified by O(log(nc)) bits, or O(log(nd)) bits. (Matrix Product) Output matrix C with F(ATB-C) ≤ e F(A) F(B). We show that Θ(ce-2log(nc)) space is needed. (Linear Regression) For d'=1, so that B is a vector b, find x so that Ax-b ≤ (1+e) minx' ∈ Reald Ax'-b. We show that Θ(d2e-1 log(nd)) space is needed. (Rank-k Approximation) Find matrix tAk of rank no more than k, so that F(A-tAk) ≤ (1+e) F A-Ak , where Ak is the best rank-k approximation to A. Our lower bound is Ω(ke-1(n+d)log(nd)) space, and we give a one-pass algorithm matching this when A is given row-wise or column-wise. For general updates, we give a one-pass algorithm needing [O(ke-2(n + d e2)log(nd))] space. We also give upper and lower bounds for algorithms using multiple passes, and a sketching analog of the CUR decomposition." ] }
1011.6397
1661411472
The Johnson-Lindenstrauss lemma is a fundamental result in probability with several applications in the design and analysis of algorithms in high dimensional geometry. Most known constructions of linear embeddings that satisfy the Johnson-Lindenstrauss property involve randomness. We address the question of explicitly constructing such embedding families and provide a construction with an almost optimal use of randomness: we use O(log(n/delta) log(log(n/delta)/epsilon)) random bits for embedding n dimensions to O(log(1/delta)/epsilon^2) dimensions with error probability at most delta, and distortion at most epsilon. In particular, for delta = 1/poly(n) and fixed epsilon, we use O(log n loglog n) random bits. Previous constructions required at least O(log^2 n) random bits to get polynomially small error.
We also note that there are efficient non-black-box derandomizations of the JL lemma @cite_3 @cite_15 . These works take as input @math points in @math and deterministically compute an embedding (that depends on the input set) into @math which preserves all pairwise distances among the given @math points.
{ "cite_N": [ "@cite_15", "@cite_3" ], "mid": [ "2126274204", "1972939134" ], "abstract": [ "We point out how the methods of Nisan [31, 32], originally developed for derandomizing space-bounded computations, may be applied to obtain polynomial-time and NC derandomizations of several probabilistic algorithms. Our list includes the randomized rounding steps of linear and semi-definite programming relaxations of optimization problems, parallel derandomization of discrepancy-type problems, and the Johnson--Lindenstrauss lemma, to name a few.A fascinating aspect of this style of derandomization is the fact that we often carry out the derandomizations directly from the statements about the correctness of probabilistic algorithms, rather than carefully mimicking their proofs.", "The Johnson-Lindenstrauss lemma provides a way to map a number of points in high-dimensional space into a low-dimensional space, with only a small distortion of the distances between the points. The proofs of the lemma are non-constructive: they show that a random mapping induces small distortions with high probability, but they do not construct the actual mapping. In this paper, we provide a procedure that constructs such a mapping deterministically in time almost linear in the number of distances to preserve times the dimension of the original space. We then use that result (together with Nisan's pseudorandom generator) to obtain an efficient derandomization of several approximation algorithms based on semidefinite programming." ] }
1011.6461
2029827788
Interface adaptation allows code written for one interface to be used with a software component with another interface. When multiple adapters are chained together to make certain adaptations possible, we need a way to analyze how well the adaptation is done in case there is more than one chain that can be used. We introduce an approach to precisely analyzing the loss in an interface adapter chain using a simple form of abstract interpretation.
Ponnekanti and Fox @cite_7 suggest using interface adapter chaining for network services to handle the different interfaces available for similar types of services. They provide a way to query all services whose interfaces can be adapted to a known interface. They also support lossy adapters, but the support is limited to detecting whether a particular method and specific parameters can be handled at runtime. They do not provide a method to analyze the loss of an interface adapter chain, so they are unable to choose a chain with less loss when alternatives are available. Adapter chains are constructed through a rule system based on the source and target interfaces of adapters, which is similar to constructing a path in a graph through a blind search algorithm. They also support the composition of services in addition to transforming a single interface to another, which is accomplished by constructing a tree of interface adapters.
{ "cite_N": [ "@cite_7" ], "mid": [ "2168075985" ], "abstract": [ "To programmatically discover and interact with services in ubiquitous computing environments, an application needs to solve two problems: (1) is it semantically meaningful to interact with a service? If the task is \"printing a file\", a printer service would be appropriate, but a screen rendering service or CD player service would not. (2) If yes, what are the mechanics of interacting with the service - remote invocation mechanics, names of methods, numbers and types of arguments, etc.? Existing service frameworks such as Jini and UPnP conflate these problems - two services are \"semantically compatible\" if and only if their interface signatures match. As a result, interoperability is severely restricted unless there is a single, globally agreed-upon, unique interface for each service type. By separating the two subproblems and delegating different parts of the problem to the user and the system, we show how applications can interoperate with services even when globally unique interfaces do not exist for certain services." ] }
1011.6461
2029827788
Interface adaptation allows code written for one interface to be used with a software component with another interface. When multiple adapters are chained together to make certain adaptations possible, we need a way to analyze how well the adaptation is done in case there is more than one chain that can be used. We introduce an approach to precisely analyzing the loss in an interface adapter chain using a simple form of abstract interpretation.
@cite_6 describes an ad hoc scheme for analyzing the loss in interface adapter chains. The scheme is based on boolean matrices which specify the methods required in a source interface to implement a method in a target interface. A mapping product is defined on these matrices which computes the loss incurred when interface adapters are chained. The mathematical model they use is not rigorously constructed, however. They also only consider the adaptation of methods as a whole and do not handle the case where methods could handle certain arguments but not others.
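To make the idea concrete, a minimal sketch of such a boolean-matrix composition is given below: each adapter is a matrix whose entry (t, s) is True when target method t can be realized from source method s, chaining is a boolean matrix product, and target methods whose row is all False count as lost. The interfaces and their sizes are hypothetical, and (as noted above) the per-argument losses are ignored.

```python
import numpy as np

# Adapter A->B: rows = methods of B, columns = methods of A.
A_to_B = np.array([[1, 0, 0],
                   [0, 1, 1],
                   [0, 0, 0]], dtype=bool)   # third method of B cannot be adapted
# Adapter B->C: rows = methods of C, columns = methods of B.
B_to_C = np.array([[0, 1, 0],
                   [0, 0, 1]], dtype=bool)

# Chained adapter A->C via the boolean matrix product.
A_to_C = (B_to_C.astype(int) @ A_to_B.astype(int)) > 0

lost = int((~A_to_C.any(axis=1)).sum())      # methods of C with no way to implement them
print(A_to_C)
print("methods of C lost across the chain:", lost)
```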
{ "cite_N": [ "@cite_6" ], "mid": [ "2132136548" ], "abstract": [ "A key feature of ubiquitous computing is service continuity which allows a user to transparently continue his task regardless of his movement. For service continuity, the underlying system needs to not only discover a service satisfying a user's request, but also provide an interface differences resolution scheme if the interface of the service found is not the same as that of the service requested. For resolving interface mismatches, one of solutions is to use an interface adapter. The most serious problem in the interface adapter-based approach is the overhead of adapter generation. There are many research efforts about adapter generation load reduction and this paper focuses on an adapter chaining scheme to reduce the number of necessary adapters among different service interfaces. We propose a construction-time adaptation loss evaluation scheme and an adapter chain construction algorithm, which finds an adapter chain with minimal adaptation loss." ] }
1011.6596
2951888384
Aggregation is an important building block of modern distributed applications, allowing the determination of meaningful properties (e.g. network size, total storage capacity, average load, majorities, etc.) that are used to direct the execution of the system. However, the majority of the existing aggregation algorithms exhibit relevant dependability issues when prospecting their use in real application environments. In this paper, we reveal some dependability issues of aggregation algorithms based on iterative averaging techniques, giving some directions to solve them. This class of algorithms is considered robust (when compared to common tree-based approaches), being independent from the used routing topology and providing an aggregation result at all nodes. However, their robustness is strongly challenged and their correctness often compromised when changing the assumptions of their working environment to more realistic ones. The correctness of this class of algorithms relies on the maintenance of a fundamental invariant, commonly designated as "mass conservation". We will argue that this main invariant is often broken in practical settings, and that additional mechanisms and modifications are required to maintain it, incurring some degradation of the algorithms' performance. In particular, we discuss the behavior of three representative algorithms (the Push-Sum Protocol, the Push-Pull Gossip protocol and Distributed Random Grouping) under asynchronous and faulty (with message loss and node crashes) environments. More specifically, we propose and evaluate two new versions of the Push-Pull Gossip protocol, which solve its message interleaving problem (evidenced even in a synchronous operation mode).
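To ground the discussion of mass conservation, the sketch below simulates a synchronous Push-Sum round structure in Python: every node keeps a (sum, weight) pair, pushes half of it to a uniformly chosen peer in each round, and estimates the average as sum/weight; the total mass (the sum of all sum components) stays constant as long as no message is lost. The topology-free peer choice and the input values are made up for illustration.

```python
import random

random.seed(1)
values = [10.0, 2.0, 7.0, 5.0]          # one input value per node
s = list(values)                         # "sum" components
w = [1.0] * len(values)                  # "weight" components

for _ in range(50):                      # synchronous rounds
    inbox_s = [0.0] * len(values)
    inbox_w = [0.0] * len(values)
    for i in range(len(values)):
        j = random.randrange(len(values))                # random target (possibly itself)
        inbox_s[i] += s[i] / 2; inbox_w[i] += w[i] / 2   # keep half
        inbox_s[j] += s[i] / 2; inbox_w[j] += w[i] / 2   # push half
    s, w = inbox_s, inbox_w

print("estimates:", [round(si / wi, 3) for si, wi in zip(s, w)])   # converge to the average
print("total mass:", sum(s))             # equals sum(values) while mass is conserved
```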
Classical approaches, like TAG @cite_18 , perform a tree-based aggregation where partial aggregates are successively computed from child nodes to their parents until the root of the aggregation tree is reached (requiring the existence of a specific routing topology). This kind of aggregation technique is often applied in practice to Wireless Sensor Networks (WSNs) @cite_7 . Other tree-based aggregation approaches can be found in @cite_3 and @cite_9 . We should point out that, although energy-efficient, these approaches may have their reliability strongly affected by the inherent presence of single points of failure in the aggregation structure.
{ "cite_N": [ "@cite_9", "@cite_18", "@cite_3", "@cite_7" ], "mid": [ "", "2167396179", "2023289350", "2122179278" ], "abstract": [ "", "We present the Tiny AGgregation (TAG) service for aggregation in low-power, distributed, wireless environments. TAG allows users to express simple, declarative queries and have them distributed and executed efficiently in networks of low-power, wireless sensors. We discuss various generic properties of aggregates, and show how those properties affect the performance of our in network approach. We include a performance study demonstrating the advantages of our approach over traditional centralized, out-of-network methods, and discuss a variety of optimizations for improving the performance and fault tolerance of the basic solution.", "Peer-to-peer (P2P) networks represent an effective way to share information, since there are no central points of failure or bottleneck. However, the flip side to the distributive nature of P2P networks is that it is not trivial to aggregate and broadcast global information efficiently. We believe that this aggregation broadcast functionality is a fundamental service that should be layered over existing Distributed Hash Tables (DHTs), and in this work, we design a novel algorithm for this purpose. Specifically, we build an aggregation broadcast tree in a bottom-up fashion by mapping nodes to their parents in the tree with a parent function. The particular parent function family we propose allows the efficient construction of multiple interior-node-disjoint trees, thus preventing single points of failure in tree structures. In this way, we provide DHTs with an ability to collect and disseminate information efficiently on a global scale. Simulation results demonstrate that our algorithm is efficient and robust.", "We show how the database community's notion of a generic query interface for data aggregation can be applied to ad-hoc networks of sensor devices. As has been noted in the sensor network literature, aggregation is important as a data reduction tool; networking approaches, however, have focused on application specific solutions, whereas our in-network aggregation approach is driven by a general purpose, SQL-style interface that can execute queries over any type of sensor data while providing opportunities for significant optimization. We present a variety of techniques to improve the reliability and performance of our solution. We also show how grouped aggregates can be efficiently computed and offer a comparison to related systems and database projects." ] }
1011.6596
2951888384
Aggregation is an important building block of modern distributed applications, allowing the determination of meaningful properties (e.g. network size, total storage capacity, average load, majorities, etc.) that are used to direct the execution of the system. However, the majority of the existing aggregation algorithms exhibit relevant dependability issues when prospecting their use in real application environments. In this paper, we reveal some dependability issues of aggregation algorithms based on iterative averaging techniques, giving some directions to solve them. This class of algorithms is considered robust (when compared to common tree-based approaches), being independent from the used routing topology and providing an aggregation result at all nodes. However, their robustness is strongly challenged and their correctness often compromised when changing the assumptions of their working environment to more realistic ones. The correctness of this class of algorithms relies on the maintenance of a fundamental invariant, commonly designated as "mass conservation". We will argue that this main invariant is often broken in practical settings, and that additional mechanisms and modifications are required to maintain it, incurring some degradation of the algorithms' performance. In particular, we discuss the behavior of three representative algorithms (the Push-Sum Protocol, the Push-Pull Gossip protocol and Distributed Random Grouping) under asynchronous and faulty (with message loss and node crashes) environments. More specifically, we propose and evaluate two new versions of the Push-Pull Gossip protocol, which solve its message interleaving problem (evidenced even in a synchronous operation mode).
Alternative aggregation algorithms based on the application of probabilistic methods can also be found in the literature. This is the case of Extrema Propagation @cite_6 and COMP @cite_12 , which reduce the computation of an aggregation function to the determination of the minimum/maximum of a collection of random numbers. These two techniques tend to emphasize speed, being less accurate than averaging approaches.
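A small, centralized illustration of the minimum-based idea: each node draws K exponential variates with rate equal to its (positive) value, the network computes the coordinate-wise minimum, which is a duplicate-insensitive operation, and the global sum can then be estimated from those minima. The sketch below takes the minimum directly instead of exchanging messages, and the estimator and parameters are illustrative rather than the exact ones used in the cited papers.

```python
import numpy as np

rng = np.random.default_rng(42)
values = np.array([4.0, 1.0, 7.0, 3.0, 5.0])   # positive value held by each node
K = 500                                        # number of independent sketches

# Node i draws K samples from Exp(rate = values[i]).
samples = rng.exponential(scale=1.0 / values[:, None], size=(len(values), K))

# The network would spread the coordinate-wise minimum by gossip/flooding;
# the minimum of Exp(r_i) variables is Exp(sum of r_i).
m = samples.min(axis=0)

estimate = (K - 1) / m.sum()                   # (nearly) unbiased estimate of sum(values)
print("true sum:", values.sum(), "estimate:", round(estimate, 2))
```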
{ "cite_N": [ "@cite_12", "@cite_6" ], "mid": [ "2119098504", "2135614224" ], "abstract": [ "Motivated by applications to sensor, peer-to-peer, and ad-hoc networks, we study the problem of computing functions of values at the nodes in a network in a totally distributed manner. In particular, we consider separable functions, which can be written as linear combinations of functions of individual variables. Known iterative algorithms for averaging can be used to compute the normalized values of such functions, but these algorithms do not extend in general to the computation of the actual values of separable functions.The main contribution of this paper is the design of a distributed randomized algorithm for computing separable functions based on properties of exponential random variables. We bound the running time of our algorithm in terms of the running time of an information spreading algorithm used as a subroutine by the algorithm. Since we are interested in totally distributed algorithms, we consider a randomized gossip mechanism for information spreading as the subroutine. Combining these algorithms yields a complete and simple distributed algorithm for computing separable functions.The second contribution of this paper is an analysis of the information spreading time of the gossip algorithm. This analysis yields an upper bound on the information spreading time, and therefore a corresponding upper bound on the running time of the algorithm for computing separable functions, in terms of the conductance of an appropriate stochastic matrix. These bounds imply that, for a class of graphs with small spectral gap (such as grid graphs), the time used by our algorithm to compute averages is of a smaller order than the time required for the computation of averages by a known iterative gossip scheme [5].", "Aggregation of data values plays an important role on distributed computations, in particular over peer-to-peer and sensor networks, as it can provide a summary of some global system property and direct the actions of self-adaptive distributed algorithms. Examples include using estimates of the network size to dimension distributed hash tables or estimates of the average system load to direct load-balancing. Distributed aggregation using non-idempotent functions, like sums, is not trivial as it is not easy to prevent a given value from being accounted for multiple times; this is especially the case if no centralized algorithms or global identifiers can be used. This paper introduces Extrema Propagation, a probabilistic technique for distributed estimation of the sum of positive real numbers. The technique relies on the exchange of duplicate insensitive messages and can be applied in flood and or epidemic settings, where multi-path routing occurs; it is tolerant of message loss; it is fast, as the number of message exchange steps equals the diameter; and it is fully distributed, with no single point of failure and the result produced at every node." ] }
1011.6663
2109244346
Generalized differential cohomology theories, in particular differential K-theory (often called “smooth K-theory”), are becoming an important tool in differential geometry and in mathematical physics.In this survey, we describe the developments of the recent decades in this area. In particular, we discuss axiomatic characterizations of differential K-theory (and that these uniquely characterize differential K-theory). We describe several explicit constructions, based on vector bundles, on families of differential operators, or using homotopy theory and classifying spaces. We explain the most important properties, in particular about the multiplicative structure and push-forward maps and will state versions of the Riemann–Roch theorem and of Atiyah–Singer family index theorem for differential K-theory.
The thesis @cite_35 discusses a model of even differential K-theory using vector bundles with connection and push-forward maps in this model. The work culminates in the special case of the index theorem for differential K-theory if @math is the point.
{ "cite_N": [ "@cite_35" ], "mid": [ "2077002798" ], "abstract": [ "Let X --> B be a proper submersion with a Riemannian structure. Given a differential K-theory class on X, we define its analytic and topological indices as differential K-theory classes on B. We prove that the two indices are the same." ] }
1011.5064
2113250405
Cloud computing provides a computing platform for the users to meet their demands in an efficient, cost-effective way. Virtualization technologies are used in the clouds to aid the efficient usage of hardware. Virtual machines (VMs) are utilized to satisfy the user needs and are placed on physical machines (PMs) of the cloud for effective usage of hardware resources and electricity in the cloud. Optimizing the number of PMs used helps in cutting down the power consumption by a substantial amount. In this paper, we present an optimal technique to map virtual machines to physical machines (nodes) such that the number of required nodes is minimized. We provide two approaches based on linear programming and quadratic programming techniques that significantly improve over the existing theoretical bounds and efficiently solve the problem of virtual machine (VM) placement in data centers.
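The packing problem described in this abstract can be related to the textbook one-dimensional bin-packing integer program; the formulation below is that standard single-resource version (demand s_i per VM, capacity C per machine), written only as a reference point and not necessarily the exact model relaxed in the paper.

```latex
\begin{align*}
\min\ & \sum_{j} y_j \\
\text{s.t.}\ & \sum_{i} s_i\, x_{ij} \le C\, y_j && \text{for every machine } j,\\
             & \sum_{j} x_{ij} = 1               && \text{for every VM } i,\\
             & x_{ij},\, y_j \in \{0,1\},
\end{align*}
```

where y_j indicates that machine j is switched on and x_ij places VM i on machine j.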
The one-dimensional bin packing problem has been studied extensively. Fernandez de la Vega and Lueker @cite_11 gave the first Asymptotic Polynomial-Time Approximation Scheme (APTAS). They put forward a rounding technique that allowed them to reduce the problem of packing large items to finding an optimum packing of just a constant number of items (at a cost of @math times the optimal solution - @math ). Their algorithm was later improved by Karmarkar and Karp @cite_21 to a (1+ @math )- @math bound.
{ "cite_N": [ "@cite_21", "@cite_11" ], "mid": [ "2123068254", "2145028977" ], "abstract": [ "We present several polynomial-time approximation algorithms for the one-dimensional bin-packing problem. using a subroutine to solve a certain linear programming relaxation of the problem. Our main results are as follows: There is a polynomial-time algorithm A such that A(I) ≤ OPT(I) + O(log2 OPT(I)). There is a polynomial-time algorithm A such that, if m(I) denotes the number of distinct sizes of pieces occurring in instance I, then A(I) ≤ OPT(I) + O(log2 m(I)). There is an approximation scheme which accepts as input an instance I and a positive real number e, and produces as output a packing using as most (1 + e) OPT(I) + O(e-2) bins. Its execution time is O(e-c n log n), where c is a constant. These are the best asymptotic performance bounds that have been achieved to date for polynomial-time bin-packing. Each of our algorithms makes at most O(log n) calls on the LP relaxation subroutine and takes at most O(n log n) time for other operations. The LP relaxation of bin packing was solved efficiently in practice by Gilmore and Gomory. We prove its membership in P, despite the fact that it has an astronomically large number of variables.", "For any listL ofn numbers in (0, 1) letL* denote the minimum number of unit capacity bins needed to pack the elements ofL. We prove that, for every positive e, there exists anO(n)-time algorithmS such that, ifS(L) denotes the number of bins used byS forL, thenS(L) L*≦1+e for anyL providedL* is sufficiently large." ] }
1011.5064
2113250405
Cloud computing provides a computing platform for the users to meet their demands in an efficient, cost-effective way. Virtualization technologies are used in the clouds to aid the efficient usage of hardware. Virtual machines (VMs) are utilized to satisfy the user needs and are placed on physical machines (PMs) of the cloud for effective usage of hardware resources and electricity in the cloud. Optimizing the number of PMs used helps in cutting down the power consumption by a substantial amount. In this paper, we present an optimal technique to map virtual machines to physical machines (nodes) such that the number of required nodes is minimized. We provide two approaches based on linear programming and quadratic programming techniques that significantly improve over the existing theoretical bounds and efficiently solve the problem of virtual machine (VM) placement in data centers.
Several modified versions of First-Fit Decreasing (FFD) have been used for VM placement. @cite_25 propose an algorithm to pack VMs optimally while minimizing the number of migrations. @cite_8 propose a reconfiguration algorithm to cut down the wastage of physical resources. @cite_16 propose an iterative rearrangement technique for improving placements in a dynamic scenario. @cite_17 presents a dynamic algorithm that forecasts the resource demands and packs VMs. @cite_22 propose a simple heuristic which aims to efficiently allocate resources.
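For reference, the plain First-Fit Decreasing baseline that these placement schemes build on can be written in a few lines; the sketch below is the one-dimensional heuristic (single resource, identical machines), not any of the cited multi-dimensional or migration-aware variants, and the normalized demands are illustrative.

```python
def first_fit_decreasing(demands, capacity=1.0):
    """Pack demands into as few unit-capacity machines as possible (FFD heuristic)."""
    bins = []  # each entry: remaining free capacity of one machine
    for size in sorted(demands, reverse=True):
        for idx, free in enumerate(bins):
            if size <= free + 1e-12:
                bins[idx] = free - size      # place on the first machine that fits
                break
        else:
            bins.append(capacity - size)     # otherwise open a new machine
    return len(bins)

# Hypothetical VM demands as fractions of one physical machine.
demands = [0.5, 0.7, 0.2, 0.4, 0.3, 0.6, 0.1]
print("physical machines used:", first_fit_decreasing(demands))   # 3 for this instance
```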
{ "cite_N": [ "@cite_22", "@cite_8", "@cite_16", "@cite_25", "@cite_17" ], "mid": [ "1578351684", "2103375760", "2096284451", "1527264199", "2160756722" ], "abstract": [ "Current web server farms have simple resource allocation models. One model used is to dedicate a server or a group of servers for each customer. Another model partitions physical servers into logical servers and assigns one to each customer. Yet another model allows customers to be active on multiple servers using load-balancing techniques. The ability to handle peak loads while minimizing cost of resources required on the farm is a subject of ongoing research.We improve resource utilization through sharing. Customer load is expressed as a multidimensional probability distribution. Each customer is assigned to a server so as to minimize the total number of servers needed to host all the customers. We use the notion of complementarity of customers in simple heuristics for this stochastic vector-packing problem. The proposed method generates a resource allocation plan while guaranteeing a QoS to each customer. Simulation results justify our scheme.", "As businesses have grown, so has the need to deploy I T applications rapidly to support the expanding business processes. Often, this growth was achieved in an unplanned way: each time a new application was needed a new server along with the application software was deployed and new storage elements were purchased. In many cases this has led to what is often referred to as \"server sprawl\", resulting in low server utilization and high system management costs. An architectural approach that is becoming increasingly popular to address this problem is known as server virtualization. In this paper we introduce the concept of server consolidation using virtualization and point out associated issues that arise in the area of application performance. We show how some of these problems can be solved by monitoring key performance metrics and using the data to trigger migration of Virtual Machines within physical servers. The algorithms we present attempt to minimize the cost of migration and maintain acceptable application performance levels.", "We present a high level overview of a virtual machine placement system in which an autonomic controller dynamically manages the mapping of virtual machines onto physical hosts in accordance with policies specified by the user. By closely monitoring virtual machine activity and employing advanced policies for dynamic workload placement, such an autonomic solution can achieve substantial cost savings from better utilization of computing resources and less frequent overload situations.", "Workload placement on servers has been traditionally driven by mainly performance objectives. In this work, we investigate the design, implementation, and evaluation of a power-aware application placement controller in the context of an environment with heterogeneous virtualized server clusters. The placement component of the application management middleware takes into account the power and migration costs in addition to the performance benefit while placing the application containers on the physical servers. The contribution of this work is two-fold: first, we present multiple ways to capture the cost-aware application placement problem that may be applied to various settings. For each formulation, we provide details on the kind of information required to solve the problems, the model assumptions, and the practicality of the assumptions on real servers. 
In the second part of our study, we present the pMapper architecture and placement algorithms to solve one practical formulation of the problem: minimizing power subject to a fixed performance requirement. We present comprehensive theoretical and experimental evidence to establish the efficacy of pMapper.", "A dynamic server migration and consolidation algorithm is introduced. The algorithm is shown to provide substantial improvement over static server consolidation in reducing the amount of required capacity and the rate of service level agreement violations. Benefits accrue for workloads that are variable and can be forecast over intervals shorter than the time scale of demand variability. The management algorithm reduces the amount of physical capacity required to support a specified rate of SLA violations for a given workload by as much as 50 as compared to static consolidation approach. Another result is that the rate of SLA violations at fixed capacity may be reduced by up to 20 . The results are based on hundreds of production workload traces across a variety of operating systems, applications, and industries." ] }
1011.5164
2109456760
This work presents the design and implementation of our Browser-based Massively Multiplayer Online Game, Living City, a simulation game fully developed at the University of Messina. Living City is a persistent and real-time digital world, running in the Web browser environment and accessible from users without any client-side installation. Today Massively Multiplayer Online Games attract the attention of Computer Scientists both for their architectural peculiarity and the close interconnection with the social network phenomenon. We will cover these two aspects paying particular attention to some aspects of the project: game balancing (e.g. algorithms behind time and money balancing); business logic (e.g., handling concurrency, cheating avoidance and availability) and, finally, social and psychological aspects involved in the collaboration of players, analyzing their activities and interconnections.
The fast growth of MMOGs and the attention they have captured are witnessed by several valid works on the subject; these focus on architectural issues (e.g., distribution techniques @cite_17 , load balancing @cite_13 , persistence @cite_4 ) and on software engineering, e.g., usability @cite_18 , performance measurement @cite_3 , and service platforms @cite_8 .
{ "cite_N": [ "@cite_18", "@cite_4", "@cite_8", "@cite_3", "@cite_13", "@cite_17" ], "mid": [ "2097063564", "2132432849", "2096984581", "2123282487", "2150694596", "2124035126" ], "abstract": [ "This study examines the usability challenges faced by new players of massively multiplayer online role-playing games (MMORPGs), one of the fastest-growing segments of the video game industry. Played in completely online worlds, these games allow players to communicate with one another, form groups and communities, and compete in a variety of fantasy environments.Nineteen subjects participated in an exploratory usability study of four games, three MMORPGs and a similar single-player game used for comparison. Results reveal that many people not usually considered as potential players of these games may be interested in them, but a wide variety of usability issues present serious problems for players inexperienced with the genre. Based on an analysis of the usability data and player feedback, specific recommendations are made to improve the experience of these games for new players. These results further demonstrate the applicability and importance of usability testing to video games.", "The most important asset of a Massively Multiplayer Online Game is its world state, as it represents the combined efforts and progress of all its participants. Thus, it is extremely important that this state is not lost in case of server failures. Survival of the world state is typically achieved by making it persistent, e.g., by storing it in a relational database. The main challenge of this approach is to track the large volume of modifications applied to the world in real time. This paper compares a variety of strategies to persist changes of the game world. While critical events must be written synchronously to the persistent storage, a set of approximation strategies are discussed and compared that are suitable for events with low consistency requirements, such as player movements. An analysis to better understand the possible limitations and bottlenecks of these strategies is presented using experimental data from an MMOG research framework. Our analysis shows that a distance-based solution offers the scalability and efficiency required for large-scale games as well as offering error bounds and eliminating unnecessary updates associated with localized movement.", "Large-scale multiplayer online games require considerable investment in hosting infrastructures. However, the difficulty of predicting the success of a new title makes investing in dedicated server and network resources very risky. A shared infrastructure based on utility computing models to support multiple games offers an attractive option for game providers whose core competency is not in managing large server deployments.In this paper we describe a prototype implementation of a shared, on demand service platform for online games. The platform builds on open standards and off-the-shelf software developed to support utility computing offerings for Web-based business applications. We describe our early experience with identifying appropriate performance metrics for provisioning game servers and with implementing the platform components that we consider essential for its acceptance.", "This paper presents traffic measurement of a Massively Multiplayer On-line Role Playing Game (MMORPG). This analysis characterizes the MMORPG traffic and shows its implications for future research issues. 
The target game is 'Lineage II' developed by NCsoft, which is one of the world's largest MMORPGs in terms of the number of concurrent users. We collected about 1 tera bytes of packets for four consecutive days including a weekend. The MMORPG traffic consists of two kinds of packets: client-generated packets and server-generated packets. We observe that the client packet has an average of 19 bytes payload size, while the average payload size of server packets is about 318 bytes. This asymmetry is due to the fact that the server transmits all the information to construct the visual environment for the clients in the same region. Likewise, the bandwidth usage of the server traffic is about ten times larger than that of the client traffic. The analysis of RTT reveals that client packets and server packets are transmitted mostly at the interval of 200 milliseconds due to TCP's delayed ACK. We find that there is a linear relationship between the number of users and the bandwidth usage except when the number of users is around 5000.", "Supporting thousands, possibly hundreds of thousands, of players is a requirement that must be satisfied when delivering server based online gaming as a commercial concern. Such a requirement may be satisfied by utilising the cumulative processing resources afforded by a cluster of servers. Clustering of servers allow great flexibility, as the game provider may add servers to satisfy an increase in processing demands, more players, or remove servers for routine maintenance or upgrading. If care is not taken, the way processing demands are distributed across a cluster of servers may hinder such flexibility and also hinder player interaction within a game. In this paper we present an approach to load balancing that is simple and effective, yet maintains the flexibility of a cluster while promoting player interaction.", "In this paper, we propose a new distributed event delivery method for MMORPG (Massively Multiplayer Online Role Playing Games). In our method, the whole game space is divided into multiple sub spaces with the same size and some player nodes are selected as responsible nodes to deliver game events occurring in their responsible sub spaces. Our method includes (1) a load balancing mechanism which allows each responsible node for the crowded sub space to dynamically construct a tree of multiple nodes and deliver events along the tree to reduce event forwarding overhead per node, (2) a technique to reduce end-to-end event delivery delay by dynamically replacing nodes in the tree, and (3) a technique to efficiently and seamlessly switch sub spaces to be observed while each player's view moves around in the game space. Through experiments, we show that our method achieves practical performance for MMORPG." ] }
1011.5168
1600978165
Online Social Networks (OSN) during last years acquired a huge and increasing popularity as one of the most important emerging Web phenomena, deeply modifying the behavior of users and contributing to build a solid substrate of connections and relationships among people using the Web. In this preliminary work paper, our purpose is to analyze Facebook, considering a significant sample of data reflecting relationships among subscribed users. Our goal is to extract, from this platform, relevant information about the distribution of these relations and exploit tools and algorithms provided by the Social Network Analysis (SNA) to discover and, possibly, understand underlying similarities between the developing of OSN and real-life social networks.
Literature on Web (and social Web) data extraction is growing: @cite_18 provided a comprehensive survey on applications and techniques. In @cite_14 , Ferrara and Baumgartner developed some techniques for automatic wrapper adaptation. A slightly modified version of that algorithm, relying on analyzing structural similarities inside the DOM tree structure of Facebook friend-list pages, is the core of the agent used here to gather data.
{ "cite_N": [ "@cite_18", "@cite_14" ], "mid": [ "2148317291", "2124157324" ], "abstract": [ "Web Data Extraction is an important problem that has been studied by means of different scientific tools and in a broad range of applications. Many approaches to extracting data from the Web have been designed to solve specific problems and operate in ad-hoc domains. Other approaches, instead, heavily reuse techniques and algorithms developed in the field of Information Extraction.This survey aims at providing a structured and comprehensive overview of the literature in the field of Web Data Extraction. We provided a simple classification framework in which existing Web Data Extraction applications are grouped into two main classes, namely applications at the Enterprise level and at the Social Web level. At the Enterprise level, Web Data Extraction techniques emerge as a key tool to perform data analysis in Business and Competitive Intelligence systems as well as for business process re-engineering. At the Social Web level, Web Data Extraction techniques allow to gather a large amount of structured data continuously generated and disseminated by Web 2.0, Social Media and Online Social Network users and this offers unprecedented opportunities to analyze human behavior at a very large scale. We discuss also the potential of cross-fertilization, i.e., on the possibility of re-using Web Data Extraction techniques originally designed to work in a given domain, in other domains.", "Information distributed through the Web keeps growing faster day by day, and for this reason, several techniques for extracting Web data have been suggested during last years. Often, extraction tasks are performed through so called wrappers, procedures extracting information from Web pages, e.g. implementing logic-based techniques. Many fields of application today require a strong degree of robustness of wrappers, in order not to compromise assets of information or reliability of data extracted." ] }
1011.5168
1600978165
Online Social Networks (OSN) during last years acquired a huge and increasing popularity as one of the most important emerging Web phenomena, deeply modifying the behavior of users and contributing to build a solid substrate of connections and relationships among people using the Web. In this preliminary work paper, our purpose is to analyze Facebook, considering a significant sample of data reflecting relationships among subscribed users. Our goal is to extract, from this platform, relevant information about the distribution of these relations and exploit tools and algorithms provided by the Social Network Analysis (SNA) to discover and, possibly, understand underlying similarities between the developing of OSN and real-life social networks.
A common SNA task is to discover, if they exist, aggregations and subsets of nodes playing similar roles or occupying a particular position in a network @cite_11 . Some closely related problems concern optimizing the visual representation of graphs @cite_21 ; for large social network graphs it is not trivial to find a meaningful graphical representation, because of the number of elements to display, and finding algorithms for the planar embedding of the graph, so as to reduce (or eliminate) intersecting edges and improve aesthetic and functional characteristics of the graph itself, is part of the solution @cite_17 .
{ "cite_N": [ "@cite_21", "@cite_17", "@cite_11" ], "mid": [ "1992709202", "2121655825", "1554338108" ], "abstract": [ "Several data presentation problems involve drawing graphs so that they are easy to read and understand. Examples include circuit schematics and diagrams for information systems analysis and design. In this paper we present a bibliographic survey on algorithms whose goal is to produce aesthetically pleasing drawings of graphs. Research on this topic is spread over the broad spectrum of computer science. This bibliography constitutes a first attempt to encompass both theoretical and application-oriented papers from disparate areas.", "We present new O(n)-time methods for planar embedding and Kuratowski subgraph isolation that were inspired by the Booth-Lueker PQ-tree implementation of the Lempel-Even-Cederbaum vertex addition method. In this paper, we improve upon our conference proceedings formulation and upon the Shih-Hsu PC-tree, both of which perform comprehensive tests of planarity conditions embedding the edges from a vertex to its descendants in a ‘batch’ vertex addition operation. These tests are simpler than but analogous to the templating scheme of the PQ-tree. Instead, we take the edge to be the fundamental unit of addition to the partial embedding while preserving planarity. This eliminates the batch planarity condition testing in favor of a few localized decisions of a path traversal process, and it exploits the fact that subgraphs can become biconnected by adding a single edge. Our method is presented using only graph constructs, but our definition of external activity, path traversal process and theoretical analysis of correctness can be applied to optimize the PC-tree", "Acknowledgements 1. Introduction Stanley Wasserman, John Scott and Peter J. Carrington 2. Recent developments in network measurement Peter V. Marsden 3. Network sampling and model fitting Ove Frank 4. Extending centrality Martin Everett and Stephen P. Borgatti 5. Positional analyses of sociometric data Patrick Doreian, Vladimir Batagelj and Anuska Ferligoj 6. Network models and methods for studying the diffusion of innovations Thomas W. Valente 7. Using correspondence analysis for joint displays of affiliation networks Katherine Faust 8. An introduction to random graphs, dependence graphs, and p* Stanley Wasserman and Garry Robins 9. Random graph models for social networks: multiple relations or multiple raters Laura M. Koehly and Philippa Pattison 10. Interdependencies and social processes: dependence graphs and generalized dependence structures Garry Robins and Philippa Pattison 11. Models for longitudinal network data Tom A. B. Snijders 12. Graphical techniques for exploring social network data Linton C. Freeman 13. Software for social network analysis Mark Huisman and Marijtje A. J. van Duijn Index." ] }
1011.5168
1600978165
Online Social Networks (OSN) have acquired huge and increasing popularity in recent years as one of the most important emerging Web phenomena, deeply modifying the behavior of users and contributing to build a solid substrate of connections and relationships among people using the Web. In this preliminary work, our purpose is to analyze Facebook, considering a significant sample of data reflecting relationships among subscribed users. Our goal is to extract, from this platform, relevant information about the distribution of these relations and to exploit tools and algorithms provided by Social Network Analysis (SNA) to discover and, possibly, understand underlying similarities between the development of OSN and real-life social networks.
Several SNA tools have been developed over the last years: GUESS @cite_3 focuses on improving the interactive exploration of graphs; NodeXL @cite_23 , developed as an add-in to the Microsoft Excel 2007 spreadsheet software, provides tools for network overview, discovery and exploration. LogAnalysis @cite_10 helps forensic analysts with the visual statistical analysis of mobile phone traffic networks. JUNG @cite_6 and Prefuse @cite_19 provide Java APIs implementing algorithms and methods for building graph visualization and SNA applications.
{ "cite_N": [ "@cite_6", "@cite_3", "@cite_19", "@cite_23", "@cite_10" ], "mid": [ "105330602", "2113630591", "2165741325", "2135844668", "1984801506" ], "abstract": [ "The JUNG (Java Universal Network Graph) Framework is a free, open-source software library that provides a common and extendible language for the manipulation, analysis, and visualization of data that can be represented as a graph or network. It is written in the Java programming language, allowing JUNG-based applications to make use of the extensive built-in capabilities of the Java Application Programming Interface (API), as well as those of other existing third-party Java libraries. We describe the design, and some details of the implementation, of the JUNG architecture, and provide illustrative examples of its use.", "As graph models are applied to more widely varying fields, researchers struggle with tools for exploring and analyzing these structures. We describe GUESS, a novel system for graph exploration that combines an interpreted language with a graphical front end that allows researchers to rapidly prototype and deploy new visualizations. GUESS also contains a novel, interactive interpreter that connects the language and interface in a way that facilities exploratory visualization tasks. Our language, Gython, is a domain-specific embedded language which provides all the advantages of Python with new, graph specific operators, primitives, and shortcuts. We highlight key aspects of the system in the context of a large user survey and specific, real-world, case studies ranging from social and knowledge networks to distributed computer network analysis.", "Although information visualization (infovis) technologies have proven indispensable tools for making sense of complex data, wide-spread deployment has yet to take hold, as successful infovis applications are often difficult to author and require domain-specific customization. To address these issues, we have created prefuse, a software framework for creating dynamic visualizations of both structured and unstructured data. prefuse provides theoretically-motivated abstractions for the design of a wide range of visualization applications, enabling programmers to string together desired components quickly to create and customize working visualizations. To evaluate prefuse we have built both existing and novel visualizations testing the toolkit's flexibility and performance, and have run usability studies and usage surveys finding that programmers find the toolkit usable and effective.", "We present NodeXL, an extendible toolkit for network overview, discovery and exploration implemented as an add-in to the Microsoft Excel 2007 spreadsheet software. We demonstrate NodeXL data analysis and visualization features with a social media data sample drawn from an enterprise intranet social network. A sequence of NodeXL operations from data import to computation of network statistics and refinement of network visualization through sorting, filtering, and clustering functions is described. These operations reveal sociologically relevant differences in the patterns of interconnection among employee participants in the social media space. The tool and method can be broadly applied.", "In this paper we present our tool LogAnalysis for forensic visual statistical analysis of mobile phone traffic. LogAnalysis graphically represents the relationships among mobile phone users with a node-link layout. 
Its aim is to explore the structure of a large graph, measure connectivity among users and give support to visual search and automatic identification of organizations. To do so, LogAnalysis integrates graphical representation of network elements with measures typical of Social Network Analysis (SNA) in order to help detectives or forensic analysts to systematically examine relationships. The analysis of data extracted from mobile phone traffic logs has a fundamental relevance in forensic investigations since it allows to unveil the structure of relationships among individuals suspected to be part of criminal organizations together with the role they play inside the organization itself. To this purpose, the Social Network Analysis (SNA) methods were heavily employed in order to understand the importance of relationships. Interpretation and visual exploration of graphs representing phone contacts over a given time interval may become demanding, due to the presence of numerous nodes and edges. Our main contribution is an interface that enables systematic analysis of social relationships using visual different techniques and statistical information. LogAnalysis allows a deeper and clearer understanding of criminal associations while evidencing key members inside the criminal ring, and or those working as link among different associations" ] }
1011.5549
2950552904
We present new and improved data structures that answer exact node-to-node distance queries in planar graphs. Such data structures are also known as distance oracles. For any directed planar graph on n nodes with non-negative lengths we obtain the following: * Given a desired space allocation @math , we show how to construct in @math time a data structure of size @math that answers distance queries in @math time per query. As a consequence, we obtain an improvement over the fastest algorithm for k-many distances in planar graphs whenever @math . * We provide a linear-space exact distance oracle for planar graphs with query time @math for any constant eps>0. This is the first such data structure with provable sublinear query time. * For edge lengths at least one, we provide an exact distance oracle of space @math such that for any pair of nodes at distance D the query time is @math . Comparable query performance had been observed experimentally but has never been explained theoretically. Our data structures are based on the following new tool: given a non-self-crossing cycle C with @math nodes, we can preprocess G in @math time to produce a data structure of size @math that can answer the following queries in @math time: for a query node u, output the distance from u to all the nodes of C. This data structure builds on and extends a related data structure of Klein (SODA'05), which reports distances to the boundary of a face, rather than a cycle. The best distance oracles for planar graphs until the current work are due to Cabello (SODA'06), Djidjev (WG'96), and Fakcharoenphol and Rao (FOCS'01). For @math and space @math , we essentially improve the query time from @math to @math .
For exact shortest-path queries, the best result to date in terms of the tradeoff between space and query time is by Fakcharoenphol and Rao @cite_15 . Their data structure, which uses @math space, can be constructed in time @math and answers queries in time @math . The preprocessing time can be improved by a logarithmic factor @cite_42 . (Applications to distance oracles are not explicitly stated in @cite_42 .)
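To make the space/query-time tradeoff concrete, here is a deliberately simplified, assumption-level sketch of a landmark-based distance oracle in Python: distances from a few landmark nodes are precomputed (space grows with the number of landmarks) and a query returns an upper bound obtained through the best landmark. This only illustrates the tradeoff and is not the Fakcharoenphol-Rao structure; the graph, the landmark choice and the function names are invented.

import networkx as nx

def build_landmark_oracle(G, landmarks):
    # Precompute single-source shortest-path distances from each landmark;
    # space grows linearly with the number of landmarks.
    return {l: nx.single_source_dijkstra_path_length(G, l, weight="weight") for l in landmarks}

def query_upper_bound(oracle, u, v):
    # Query: best distance bound obtained by routing through some landmark (triangle inequality).
    return min(d[u] + d[v] for d in oracle.values() if u in d and v in d)

# Small weighted grid graph, purely for illustration.
G = nx.grid_2d_graph(4, 4)
nx.set_edge_attributes(G, 1, "weight")
oracle = build_landmark_oracle(G, landmarks=[(0, 0), (3, 3)])
print(query_upper_bound(oracle, (0, 3), (3, 0)))   # an upper bound on the true distance (here 6)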
{ "cite_N": [ "@cite_15", "@cite_42" ], "mid": [ "2768149714", "1988593903" ], "abstract": [ "In this paper, we present an O(n log3 n) time algorithm for finding shortest paths in an n-node planar graph with real weights. This can be compared to the best previous strongly polynomial time algorithm developed by Lipton, Rose, and Tarjan in 1978 which runs in O(n3 2) time, and the best polynomial time algorithm developed by Henzinger, Klein, Subramanian, and Rao in 1994 which runs in O(n 4 3) time. We also present significantly improved data structures for reporting distances between pairs of nodes and algorithms for updating the data structures when edge weights change.", "We give an O(n log2 n)-time, linear-space algorithm that, given a directed planar graph with positive and negative arc-lengths, and given a node s, finds the distances from s to all nodes." ] }
1011.5549
2950552904
We present new and improved data structures that answer exact node-to-node distance queries in planar graphs. Such data structures are also known as distance oracles. For any directed planar graph on n nodes with non-negative lengths we obtain the following: * Given a desired space allocation @math , we show how to construct in @math time a data structure of size @math that answers distance queries in @math time per query. As a consequence, we obtain an improvement over the fastest algorithm for k-many distances in planar graphs whenever @math . * We provide a linear-space exact distance oracle for planar graphs with query time @math for any constant eps>0. This is the first such data structure with provable sublinear query time. * For edge lengths at least one, we provide an exact distance oracle of space @math such that for any pair of nodes at distance D the query time is @math . Comparable query performance had been observed experimentally but has never been explained theoretically. Our data structures are based on the following new tool: given a non-self-crossing cycle C with @math nodes, we can preprocess G in @math time to produce a data structure of size @math that can answer the following queries in @math time: for a query node u, output the distance from u to all the nodes of C. This data structure builds on and extends a related data structure of Klein (SODA'05), which reports distances to the boundary of a face, rather than a cycle. The best distance oracles for planar graphs until the current work are due to Cabello (SODA'06), Djidjev (WG'96), and Fakcharoenphol and Rao (FOCS'01). For @math and space @math , we essentially improve the query time from @math to @math .
Efficient data structures for shortest-path queries have also been devised for restricted classes of planar graphs @cite_37 @cite_57 and for restricted types of queries @cite_29 @cite_50 @cite_6 @cite_17 . If approximate distances and shortest paths are sufficient, a better tradeoff with @math space and @math query time has been achieved @cite_48 @cite_26 @cite_17 @cite_32 .
{ "cite_N": [ "@cite_37", "@cite_26", "@cite_48", "@cite_29", "@cite_32", "@cite_6", "@cite_57", "@cite_50", "@cite_17" ], "mid": [ "2094724468", "2003457914", "2096304547", "2092904196", "1623319572", "2060296641", "1985248597", "2068243119", "2078443814" ], "abstract": [ "We describe algorithms for finding shortest paths and distances in outerplanar and planar digraphs that exploit the particular topology of the input graph. An important feature of our algorithms is that they can work in a dynamic environment, where the cost of any edge can be changed or the edge can be deleted. In the case of outerplanar digraphs, our data structures can be updated after any such change in only logarithmic time. A distance query is also answered in logarithmic time. In the case of planar digraphs, we give an interesting tradeoff between preprocessing, query, and update times depending on the value of a certain topological parameter of the graph. Our results can be extended to n -vertex digraphs of genus O(n 1-e ) for any e>0 .", "We describe a method for preprocessing a weighted planar undirected graph and representing the results of the preprocessing so as to facilitate subsequent approximate distance queries. For any 0 2 n distances. By using compressed representation of the distances, the number of bytes required is about .5ne-1(9 + 3 log e-1)log 2 n (at the expense of a small increase in query time).", "It is shown that a planar digraph can be preprocessed in near-linear time, producing a near-linear space oracle that can answer reachability queries in constant time. The oracle can be distributed as an O(log n) space label for each vertex and then we can determine if one vertex can reach another considering their two labels only.The approach generalizes to give a near-linear space approximate distances oracle for a weighted planar digraph. With weights drawn from 0, …, N , it approximates distances within a factor (1 + e) in O(log log (nN) + 1 e) time. Our scheme can be extended to find and route along correspondingly short dipaths.", "", "A (1+e)-approximate distance oracle for a graph is a data structure that supports approximate point-to-point shortest-path-distance queries. The most relevant measures for a distance-oracle construction are: space, query time, and preprocessing time. There are strong distance-oracle constructions known for planar graphs (Thorup, JACM'04) and, subsequently, minor-excluded graphs (Abraham and Gavoille, PODC'06). However, these require Ω(e-1n lg n) space for n-node graphs. In this paper, for planar graphs, bounded-genus graphs, and minor-excluded graphs we give distance-oracle constructions that require only O(n) space. The big O hides only a fixed constant, independent of e and independent of genus or size of an excluded minor. The preprocessing times for our distance oracle are also faster than those for the previously known constructions. For planar graphs, the preprocessing time is O(nlg2 n). However, our constructions have slower query times. For planar graphs, the query time is O(e-2 lg2 n). For all our linear-space results, we can in fact ensure, for any δ > 0, that the space required is only 1 + δ times the space required just to represent the graph itself.", "Weighted paths in directed grid graphs of dimension (m X n) can be used to model the string edit problem, which consists of obtaining optimal (weighted) alignments between substrings of A, |A|=m, and substrings of B, |B|=n. 
We build a data structure (in O(mn log m) time) that supports O(log m) time queries about the weight of any of the O(m2n) best paths from the vertices in column 0 of the graph to all other vertices. Using these techniques we present a simple O(n2 log n) time and @math space algorithm to find all (the locally optimal) approximate tandem (or nontandem) repeats xy within a string of size n. This improves (by a factor of log n) upon several previous algorithms for this problem and is the first algorithm to find all locally optimal repeats. For edit graphs with weights in 0, -1, 1 , a slight modification of our techniques yields an O(n2) algorithm for the cyclic string comparison problem, as compared to O(n2 log n) for the case of general weights.", "The problem of processing shortest pa th queries in graphs arises in application areas such as intelligent t ranspor ta t ion system (ITS), geographic information system (GIS), and robotics. In this paper, we present an efficient algorithmic solution for processing shortest pa th queries in undirected planar graphs with non-negative edge weights. Previous algorithms for this problem on planar graphs all have a tradeoff between the query t ime and the da ta structure space used by the solutions. The previously best known trade-off on an n-vertex planar graph G in the worst case is O(v -r) t ime for a pa th length query and O(v -r + L) t ime for report ing an actual shortest path, with a da ta structure of O(n2 x ) space, where r is an integer parameter with 1 < r _< n and L is the number of edges on the output shortest path. We present a new scheme, called frame search, that exploits the graph planari ty in a novel fashion. Wi th this scheme, we build an improved da ta structure of O(n + pv -r + p2 r) space (in O(n + p2 v + pr 3 4) time), where p is the minimum cardinality of a face-covering of the planar embedding of G with 1 < p < O(n), and r is an integer parameter with 1 < r < p. This da ta structure enables us to process each length query in O(v -rlog r + a(n)) t ime and report an actual shortest path in an addit ional O(L) time, where a(n) is the inverse of Ackermann's function. In the worst case, our approach reduces the previously best query t ime by a factor of up to O(n 1 4) if the same amount of space is used. Our technique can also be applied to improve the previous query algorithms for some geometric shortest pa th problems on the plane.", "We present a new approach for answering short path queries in planar graphs. For any fixed constant k and a given unweighted planar graph G e (V, E), one can build in O(vVv) time a data structure, which allows to check in O(1) time whether two given vertices are at distance at most k in G and if so a shortest path between them is returned. Graph G can be undirected as well as directed.Our data structure works in fully dynamic environment. It can be updated in O(1) time after removing an edge or a vertex while updating after an edge insertion takes polylogarithmic amortized time. Besides deleting elements one can also disable ones for some time. 
It is motivated by a practical situation where nodes or links of a network may be temporarily out of service.Our results can be easily generalized to other wide classes of graphs---for instance we can take any minor-closed family of graphs.", "Given an n-node planar graph with nonnegative edge-lengths, our algorithm takes O(n log n) time to construct a data structure that supports queries of the following form in O(log n) time: given a destination node t on the boundary of the infinite face, and given a start node s anywhere, find the s-to-t distance." ] }
1011.5549
2950552904
We present new and improved data structures that answer exact node-to-node distance queries in planar graphs. Such data structures are also known as distance oracles. For any directed planar graph on n nodes with non-negative lengths we obtain the following: * Given a desired space allocation @math , we show how to construct in @math time a data structure of size @math that answers distance queries in @math time per query. As a consequence, we obtain an improvement over the fastest algorithm for k-many distances in planar graphs whenever @math . * We provide a linear-space exact distance oracle for planar graphs with query time @math for any constant eps>0. This is the first such data structure with provable sublinear query time. * For edge lengths at least one, we provide an exact distance oracle of space @math such that for any pair of nodes at distance D the query time is @math . Comparable query performance had been observed experimentally but has never been explained theoretically. Our data structures are based on the following new tool: given a non-self-crossing cycle C with @math nodes, we can preprocess G in @math time to produce a data structure of size @math that can answer the following queries in @math time: for a query node u, output the distance from u to all the nodes of C. This data structure builds on and extends a related data structure of Klein (SODA'05), which reports distances to the boundary of a face, rather than a cycle. The best distance oracles for planar graphs until the current work are due to Cabello (SODA'06), Djidjev (WG'96), and Fakcharoenphol and Rao (FOCS'01). For @math and space @math , we essentially improve the query time from @math to @math .
Based on separators, geometric properties, and other characteristics such as highway structures, many efficient practical methods have been devised @cite_31 @cite_30 @cite_8 ; their time and space complexities are, however, difficult to analyze. Competitive worst-case bounds have been achieved under the assumption that actual road networks have small highway dimension @cite_41 @cite_35 . While our preprocessing algorithm (Theorem ) runs in almost linear time, some of the problems that appear in the preprocessing stage of practical route planning methods have recently been proven to be NP-hard @cite_58 @cite_11 .
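The query pattern that these practical methods accelerate can be sketched with plain bidirectional Dijkstra, as below; this is only a baseline illustration (contraction hierarchies additionally restrict each search direction to edges towards more "important" nodes), and the toy road network and weights are invented.

import networkx as nx

# Toy road network: nodes are junctions, edge weights are travel times (all values are made up).
G = nx.Graph()
G.add_weighted_edges_from([
    ("A", "B", 4), ("B", "C", 3), ("C", "D", 2),
    ("A", "E", 1), ("E", "F", 5), ("F", "D", 1),
])

# Bidirectional Dijkstra searches simultaneously from source and target and stops when
# the two frontiers meet; hierarchical speedup techniques prune each search further.
length, path = nx.bidirectional_dijkstra(G, "A", "D", weight="weight")
print(length, path)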
{ "cite_N": [ "@cite_30", "@cite_35", "@cite_8", "@cite_41", "@cite_31", "@cite_58", "@cite_11" ], "mid": [ "1493214567", "85521454", "2083019227", "", "2148831787", "1505815330", "1594459318" ], "abstract": [ "We present a new speedup technique for route planning that exploits the hierarchy inherent in real world road networks. Our algorithm preprocesses the eight digit number of nodes needed for maps of the USA or Western Europe in a few hours using linear space. Shortest (i.e. fastest) path queries then take around eight milliseconds to produce exact shortest paths. This is about 2 000 times faster than using Dijkstra’s algorithm.", "We explore the relationship between VC-dimension and graph algorithm design. In particular, we show that set systems induced by sets of vertices on shortest paths have VC-dimension at most two. This allows us to use a result from learning theory to improve time bounds on query algorithms for the point-to-point shortest path problem in networks of low highway dimension, such as road networks. We also refine the definitions of highway dimension and related concepts, making them more general and potentially more relevant to practice. In particular, we define highway dimension in terms of set systems induced by shortest paths, and give cardinality-based and average case definitions.", "When you drive to somewhere far away, you will leave your current location via one of only a few important traffic junctions. Starting from this informal observation, we developed an algorithmic approach, transit node routing, that allows us to reduce quickest path queries in road networks to a small number of table lookups. For road maps of Western Europe and the United States, our best query times improved over the best previously published figures by two orders of magnitude. This is also more than one million times faster than the best known algorithm for general networks.", "", "We present a route planning technique solely based on the concept of node contraction. The nodes are first ordered by 'importance'. A hierarchy is then generated by iteratively contracting the least important node. Contracting a node υ means replacing shortest paths going through v by shortcuts. We obtain a hierarchical query algorithm using bidirectional shortest-path search. The forward search uses only edges leading to more important nodes and the backward search uses only edges coming from more important nodes. For fastest routes in road networks, the graph remains very sparse throughout the contraction process using rather simple heuristics for ordering the nodes. We have five times lower query times than the best previous hierarchical Dijkstra-based speedup techniques and a negative space overhead, i.e., the data structure for distance computation needs less space than the input graph. CHs can be combined with many other route planning techniques, leading to improved performance for many-to-many routing, transit-node routing, goal-directed routing or mobile and dynamic scenarios.", "During the last years, speed-up techniques for Dijkstra 's algorithm have been developed that make the computation of shortest paths a matter of microseconds even on huge road networks. The most sophisticated methods enhance the graph by inserting shortcuts , i.e. additional edges, that represent shortest paths in the graph. Until now, all existing shortcut-insertion strategies are heuristics and no theoretical results on the topic are known. 
In this work, we formalize the problem of adding shortcuts as a graph augmentation problem, study the algorithmic complexity of the problem, give approximation algorithms and show how to stochastically evaluate a given shortcut assignment on graphs that are too big to evaluate it exactly.", "During the last years, preprocessing-based techniques have been developed to compute shortest paths between two given points in a road network. These speed-up techniques make the computation a matter of microseconds even on huge networks. While there is a vast amount of experimental work in the field, there is still large demand on theoretical foundations. The preprocessing phases of most speed-up techniques leave open some degree of freedom which, in practice, is filled in a heuristical fashion. Thus, for a given speed-up technique, the problem arises of how to fill the according degree of freedom optimally. Until now, the complexity status of these problems was unknown. In this work, we answer this question by showing NP-hardness for the recent techniques." ] }
1011.5568
2949403782
Understanding and modelling resources of Internet end hosts is essential for the design of desktop software and Internet-distributed applications. In this paper we develop a correlated resource model of Internet end hosts based on real trace data taken from the SETI@home project. This data covers a 5-year period with statistics for 2.7 million hosts. The resource model is based on statistical analysis of host computational power, memory, and storage as well as how these resources change over time and the correlations between them. We find that resources with few discrete values (core count, memory) are well modeled by exponential laws governing the change of relative resource quantities over time. Resources with a continuous range of values are well modeled with either correlated normal distributions (processor speed for integer operations and floating point operations) or log-normal distributions (available disk space). We validate and show the utility of the models by applying them to a resource allocation problem for Internet-distributed applications, and demonstrate their value over other models. We also make our trace data and tool for automatically generating realistic Internet end hosts publicly available.
Several previous works have investigated the modelling of clusters or computational Grids @cite_10 @cite_25 @cite_7 . These works differ from ours in terms of the resource focus of the model, the host heterogeneity considered, and the evolution and correlation of resources over time. Also, most Grid resource models are based on data from many years ago and may no longer be valid for present configurations.
{ "cite_N": [ "@cite_10", "@cite_25", "@cite_7" ], "mid": [ "2121294889", "", "2134778861" ], "abstract": [ "Understanding large Grid platform configurations and generating representative synthetic configurations is critical for Grid computing research. This paper presents an analysis of existing resource configurations and proposes a Grid platform generator that synthesizes realistic configurations of both computing and communication resources. Our key contributions include the development of statistical models for currently deployed resources and using these estimates for modeling the characteristics of future systems. Through the analysis of the configurations of 114 clusters and over 10,000 processors, we identify appropriate distributions for resource configuration parameters in many typical clusters. Using well-established statistical tests, we validate our models against a second resource collection of 191 clusters and over 10,000 processors, and show that our models effectively capture the resource characteristics found in real world resource infrastructures. These models are realized in a resource generator, which can be easily recalibrated by running it on a training sample set.", "", "Realistic workloads are essential in evaluating middleware for computational grids. One important component is the raw grid itself: a network topology graph annotated with the hardware and software available on each node and link. This paper defines our requirements for grid generation and presents GridG, our extensible generator. We describe GridG in two steps: topology generation and annotation. For topology generation, we have both model and mechanism. We extend Tiers, an existing tool from the networking community, to produce graphs that obey recently discovered power laws of Internet topology. We also contribute to network topology theory by illustrating a contradiction between two laws and proposing a new version of one of them. For annotation, GridG captures intra- and inter-host correlations between attributes using conditional probability rules. We construct a set of rules, including one based on empirical evidence of OS concentration in subnets, that produce sensible host annotations." ] }
1011.5568
2949403782
Understanding and modelling resources of Internet end hosts is essential for the design of desktop software and Internet-distributed applications. In this paper we develop a correlated resource model of Internet end hosts based on real trace data taken from the SETI@home project. This data covers a 5-year period with statistics for 2.7 million hosts. The resource model is based on statistical analysis of host computational power, memory, and storage as well as how these resources change over time and the correlations between them. We find that resources with few discrete values (core count, memory) are well modeled by exponential laws governing the change of relative resource quantities over time. Resources with a continuous range of values are well modeled with either correlated normal distributions (processor speed for integer operations and floating point operations) or log-normal distributions (available disk space). We validate and show the utility of the models by applying them to a resource allocation problem for Internet-distributed applications, and demonstrate their value over other models. We also make our trace data and tool for automatically generating realistic Internet end hosts publicly available.
The closest related work, described in @cite_15 , gives a general characterization of Internet host resources. However, statistical models are not provided, and the evolution and dynamics of Internet resources are not investigated. Also, certain hardware attributes (such as cores) are not characterized or modeled, owing to the technology available at that time.
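For illustration, the kind of correlated resource model discussed here can be sampled as in the following Python sketch; the means, variances, correlation and log-normal parameters are invented placeholders and not the values fitted to the SETI@home trace.

import numpy as np

rng = np.random.default_rng(0)

# Correlated normal model for integer and floating-point speeds;
# the means, variances and correlation are placeholder assumptions.
mean = np.array([2500.0, 2200.0])
cov = np.array([[300.0**2, 0.8 * 300.0 * 280.0],
                [0.8 * 300.0 * 280.0, 280.0**2]])
int_speed, flop_speed = rng.multivariate_normal(mean, cov, size=1000).T

# Log-normal model for available disk space (GB), again with placeholder parameters.
disk_gb = rng.lognormal(mean=4.0, sigma=1.0, size=1000)

print(int_speed.mean(), flop_speed.mean(), np.median(disk_gb))
print(np.corrcoef(int_speed, flop_speed)[0, 1])   # should be close to the assumed 0.8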
{ "cite_N": [ "@cite_15" ], "mid": [ "2122974233" ], "abstract": [ "\"Volunteer computing\" uses Internet-connected computers, volunteered by their owners, as a source of computing power and storage. This paper studies the potential capacity of volunteer computing. We analyzed measurements of over 330,000 hosts participating in a volunteer computing project. These measurements include processing power, memory, disk space, network throughput, host availability, userspecified limits on resource usage, and host churn. We show that volunteer computing can support applications that are significantly more data-intensive, or have larger memory and storage requirements, than those in current projects." ] }
1011.5599
2950720108
The neighbourhood function N(t) of a graph G gives, for each t, the number of pairs of nodes (x, y) such that y is reachable from x in less than t hops. The neighbourhood function provides a wealth of information about the graph (e.g., it easily allows one to compute its diameter), but it is very expensive to compute it exactly. Recently, the ANF algorithm (approximate neighbourhood function) has been proposed with the purpose of approximating N_G(t) on large graphs. We describe a breakthrough improvement over ANF in terms of speed and scalability. Our algorithm, called HyperANF, uses the new HyperLogLog counters and combines them efficiently through broadword programming; our implementation uses overdecomposition to exploit multi-core parallelism. With HyperANF, for the first time we can compute in a few hours the neighbourhood function of graphs with billions of nodes with a small error and good confidence using a standard workstation. Then, we turn to the study of the distribution of the shortest paths between reachable nodes (that can be efficiently approximated by means of HyperANF), and discover the surprising fact that its index of dispersion provides a clear-cut characterisation of proper social networks vs. web graphs. We thus propose the spid (Shortest-Paths Index of Dispersion) of a graph as a new, informative statistic that is able to discriminate between the above two types of graphs. We believe this is the first proposal of a significant new non-local structural index for complex networks whose computation is highly scalable.
HyperANF is an evolution of ANF @cite_12 , which is implemented by the tool snap . We will give some timing comparisons with snap , but we can only do so for relatively small networks, as the large memory footprint of snap precludes its application to large graphs.
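For reference, the iteration that ANF and HyperANF approximate can be written exactly with per-node reachability sets, as in the Python sketch below; HyperANF's contribution is to replace these sets with HyperLogLog counters so that unions and size estimates remain cheap. The graph and function name are illustrative only.

import networkx as nx

def neighbourhood_function(G, max_t):
    # R[x] is the set of nodes reachable from x within t hops; the loop is the exact,
    # set-based counterpart of the ANF iteration.  HyperANF replaces these sets with
    # HyperLogLog counters, so that unions and cardinality estimates stay cheap.
    R = {x: {x} for x in G}
    N = []
    for _ in range(max_t):
        R = {x: set.union(R[x], *(R[y] for y in G.successors(x))) for x in G}
        N.append(sum(len(R[x]) for x in G))   # number of (x, y) pairs within the current radius
    return N

G = nx.gnp_random_graph(50, 0.05, seed=1, directed=True)
print(neighbourhood_function(G, max_t=5))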
{ "cite_N": [ "@cite_12" ], "mid": [ "2040615703" ], "abstract": [ "Graphs are an increasingly important data source, with such important graphs as the Internet and the Web. Other familiar graphs include CAD circuits, phone records, gene sequences, city streets, social networks and academic citations. Any kind of relationship, such as actors appearing in movies, can be represented as a graph. This work presents a data mining tool, called ANF, that can quickly answer a number of interesting questions on graph-represented data, such as the following. How robust is the Internet to failures? What are the most influential database papers? Are there gender differences in movie appearance patterns? At its core, ANF is based on a fast and memory-efficient approach for approximating the complete \"neighbourhood function\" for a graph. For the Internet graph (268K nodes), ANF's highly-accurate approximation is more than 700 times faster than the exact computation. This reduces the running time from nearly a day to a matter of a minute or two, allowing users to perform ad hoc drill-down tasks and to repeatedly answer questions about changing data sources. To enable this drill-down, ANF employs new techniques for approximating neighbourhood-type functions for graphs with distinguished nodes and or edges. When compared to the best existing approximation, ANF's approach is both faster and more accurate, given the same resources. Additionally, unlike previous approaches, ANF scales gracefully to handle disk resident graphs. Finally, we present some of our results from mining large graphs using ANF." ] }
1011.5599
2950720108
The neighbourhood function N(t) of a graph G gives, for each t, the number of pairs of nodes (x, y) such that y is reachable from x in less than t hops. The neighbourhood function provides a wealth of information about the graph (e.g., it easily allows one to compute its diameter), but it is very expensive to compute it exactly. Recently, the ANF algorithm (approximate neighbourhood function) has been proposed with the purpose of approximating N_G(t) on large graphs. We describe a breakthrough improvement over ANF in terms of speed and scalability. Our algorithm, called HyperANF, uses the new HyperLogLog counters and combines them efficiently through broadword programming; our implementation uses overdecomposition to exploit multi-core parallelism. With HyperANF, for the first time we can compute in a few hours the neighbourhood function of graphs with billions of nodes with a small error and good confidence using a standard workstation. Then, we turn to the study of the distribution of the shortest paths between reachable nodes (that can be efficiently approximated by means of HyperANF), and discover the surprising fact that its index of dispersion provides a clear-cut characterisation of proper social networks vs. web graphs. We thus propose the spid (Shortest-Paths Index of Dispersion) of a graph as a new, informative statistic that is able to discriminate between the above two types of graphs. We believe this is the first proposal of a significant new non-local structural index for complex networks whose computation is highly scalable.
Recently, a MapReduce-based distributed implementation of ANF called HADI @cite_7 has been presented. HADI runs on one of the fifty largest supercomputers---the Hadoop cluster M45. The only published data about HADI's performance is the computation of the neighbourhood function of a Kronecker graph with 2 billion links, which required half an hour using 90 machines. HyperANF can compute the same function in .
{ "cite_N": [ "@cite_7" ], "mid": [ "2140030920" ], "abstract": [ "Given large, multimillion-node graphs (e.g., Facebook, Web-crawls, etc.), how do they evolve over time? How are they connected? What are the central nodes and the outliers? In this article we define the Radius plot of a graph and show how it can answer these questions. However, computing the Radius plot is prohibitively expensive for graphs reaching the planetary scale. There are two major contributions in this article: (a) We propose HADI (HAdoop DIameter and radii estimator), a carefully designed and fine-tuned algorithm to compute the radii and the diameter of massive graphs, that runs on the top of the Hadoop MapReduce system, with excellent scale-up on the number of available machines (b) We run HADI on several real world datasets including YahooWeb (6B edges, 1 8 of a Terabyte), one of the largest public graphs ever analyzed. Thanks to HADI, we report fascinating patterns on large networks, like the surprisingly small effective diameter, the multimodal bimodal shape of the Radius plot, and its palindrome motion over time." ] }
1011.5599
2950720108
The neighbourhood function N(t) of a graph G gives, for each t, the number of pairs of nodes (x, y) such that y is reachable from x in less than t hops. The neighbourhood function provides a wealth of information about the graph (e.g., it easily allows one to compute its diameter), but it is very expensive to compute it exactly. Recently, the ANF algorithm (approximate neighbourhood function) has been proposed with the purpose of approximating N_G(t) on large graphs. We describe a breakthrough improvement over ANF in terms of speed and scalability. Our algorithm, called HyperANF, uses the new HyperLogLog counters and combines them efficiently through broadword programming; our implementation uses overdecomposition to exploit multi-core parallelism. With HyperANF, for the first time we can compute in a few hours the neighbourhood function of graphs with billions of nodes with a small error and good confidence using a standard workstation. Then, we turn to the study of the distribution of the shortest paths between reachable nodes (that can be efficiently approximated by means of HyperANF), and discover the surprising fact that its index of dispersion provides a clear-cut characterisation of proper social networks vs. web graphs. We thus propose the spid (Shortest-Paths Index of Dispersion) of a graph as a new, informative statistic that is able to discriminate between the above two types of graphs. We believe this is the first proposal of a significant new non-local structural index for complex networks whose computation is highly scalable.
The rather complete survey of related literature in @cite_7 shows that, before ANF, essentially no data mining tool was able to approximate the neighbourhood function of very large graphs reliably. A remarkable exception is Cohen's work @cite_10 , which provides strong theoretical guarantees but experimentally turns out to be not as scalable as the ANF approach; it is worth noting, though, that one of the proposed applications of @cite_10 () is structurally identical to ANF.
{ "cite_N": [ "@cite_10", "@cite_7" ], "mid": [ "1965996575", "2140030920" ], "abstract": [ "Computing the transitive closure in directed graphs is a fundamental graph problem. We consider the more restricted problem of computing the number of nodes reachable from every node and the size of the transitive closure. The fastest known transitive closure algorithms run inO(min mn,n2.38 ) time, wherenis the number of nodes andmthe number of edges in the graph. We present anO(m) time randomized (Monte Carlo) algorithm that estimates, with small relative error, the sizes of all reachability sets and the transitive closure. Another ramification of our estimation scheme is a O(m) time algorithm for estimating sizes of neighborhoods in directed graphs with nonnegative edge lengths. Our size-estimation algorithms are much faster than performing the respective explicit computations.", "Given large, multimillion-node graphs (e.g., Facebook, Web-crawls, etc.), how do they evolve over time? How are they connected? What are the central nodes and the outliers? In this article we define the Radius plot of a graph and show how it can answer these questions. However, computing the Radius plot is prohibitively expensive for graphs reaching the planetary scale. There are two major contributions in this article: (a) We propose HADI (HAdoop DIameter and radii estimator), a carefully designed and fine-tuned algorithm to compute the radii and the diameter of massive graphs, that runs on the top of the Hadoop MapReduce system, with excellent scale-up on the number of available machines (b) We run HADI on several real world datasets including YahooWeb (6B edges, 1 8 of a Terabyte), one of the largest public graphs ever analyzed. Thanks to HADI, we report fascinating patterns on large networks, like the surprisingly small effective diameter, the multimodal bimodal shape of the Radius plot, and its palindrome motion over time." ] }
1011.4135
1936083722
To harness the ever growing capacity and decreasing cost of storage, providing an abstraction of dependable storage in the presence of crash-stop and Byzantine failures is compulsory. We propose a decentralized Reed Solomon coding mechanism with minimum communication overhead. Using a progressive data retrieval scheme, a data collector contacts only the necessary number of storage nodes needed to guarantee data integrity. The scheme gracefully adapts the cost of successful data retrieval to the number of storage node failures. Moreover, by leveraging the Welch-Berlekamp algorithm, it avoids unnecessary computations. Compared to the state-of-the-art decoding scheme, the implementation and evaluation results show that our progressive data retrieval scheme has up to 35 times better computation performance for low Byzantine node rates. Additionally, the communication cost in data retrieval is derived analytically and corroborated by Monte-Carlo simulation results. Our implementation is flexible in that the level of redundancy it provides is independent of the number of data generating nodes, a requirement for distributed storage systems
RS codes are the most well-known class of MDS codes. They can not only recover data when nodes fail, but also guarantee recovery when a certain subset of the nodes is Byzantine. RS codes operate on symbols of @math bits. An @math RS code is a linear code, with each symbol in @math , and parameters @math and @math , where @math is the total number of symbols in a codeword, @math is the total number of information symbols, and @math is the symbol-error-correcting capability of the code. * Encoding: Let the sequence of @math information symbols in @math be @math , and let @math be the information polynomial of @math , represented as @math . The codeword polynomial @math corresponding to @math can be encoded as @math , where @math is a generator polynomial of the RS code. It is well-known that @math can be obtained as where @math is a primitive element in @math , @math is an arbitrary integer, and @math . * Decoding: The decoding process of RS codes is more complex. A complete description of the decoding of RS codes can be found in @cite_6 .
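A minimal sketch of the generator-polynomial encoding step described above is given below in Python. For simplicity it works over the prime field GF(929) with primitive element 3 (the field used, e.g., by PDF417), whereas production RS implementations typically use GF(2^m) with table-based arithmetic; the parameters and data values are illustrative assumptions.

P = 929          # field size (prime), illustrative choice
ALPHA = 3        # a primitive element of GF(929)
T = 2            # symbol-error-correcting capability, giving 2T parity symbols

def poly_mul(a, b):
    # Multiply two polynomials (coefficient lists, lowest degree first) over GF(P).
    out = [0] * (len(a) + len(b) - 1)
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            out[i + j] = (out[i + j] + x * y) % P
    return out

def generator_poly(t, b=1):
    # g(x) = (x - alpha^b)(x - alpha^(b+1)) ... (x - alpha^(b+2t-1))
    g = [1]
    for j in range(b, b + 2 * t):
        g = poly_mul(g, [(-pow(ALPHA, j, P)) % P, 1])
    return g

def rs_encode(info_symbols):
    # Non-systematic encoding: c(x) = i(x) * g(x).
    return poly_mul(info_symbols, generator_poly(T))

info = [17, 42, 93, 500, 7]          # k = 5 information symbols in GF(929), toy values
codeword = rs_encode(info)           # n = k + 2T = 9 codeword symbols
print(codeword)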
{ "cite_N": [ "@cite_6" ], "mid": [ "58580312" ], "abstract": [ "Preface. List of Program Files. List of Laboratory Exercises. List of Algorithms. List of Figures. List of Tables. List of Boxes. PART I: INTRODUCTION AND FOUNDATIONS. 1. A Context for Error Correcting Coding. PART II: BLOCK CODES. 2. Groups and Vector Spaces. 3. Linear Block Codes. 4. Cyclic Codes, Rings, and Polynomials. 5. Rudiments of Number Theory and Algebra. 6. BCH and Reed-Solomon Codes: Designer Cyclic Codes. 7. Alternate Decoding Algorithms for Reed-Solomon Codes. 8. Other Important Block Codes. 9. Bounds on Codes. 10. Bursty Channels, Interleavers, and Concatenation. 11. Soft-Decision Decoding Algorithms. PART III: CODES ON GRAPHS. 12. Convolution Codes. 13. Trefils Coded Modulation. PART IV: INTERATIVELY DECODED CODES. 14. Turbo Codes. 15. Low-Density Parity-Check Codes. 16. Decoding Algorithms on Graphs. PART V: SPACE-TIME CODING. 17. Fading Channels and Space-Time Coding. References. Index." ] }
1011.4135
1936083722
To harness the ever growing capacity and decreasing cost of storage, providing an abstraction of dependable storage in the presence of crash-stop and Byzantine failures is compulsory. We propose a decentralized Reed Solomon coding mechanism with minimum communication overhead. Using a progressive data retrieval scheme, a data collector contacts only the necessary number of storage nodes needed to guarantee data integrity. The scheme gracefully adapts the cost of successful data retrieval to the number of storage node failures. Moreover, by leveraging the Welch-Berlekamp algorithm, it avoids unnecessary computations. Compared to the state-of-the-art decoding scheme, the implementation and evaluation results show that our progressive data retrieval scheme has up to 35 times better computation performance for low Byzantine node rates. Additionally, the communication cost in data retrieval is derived analytically and corroborated by Monte-Carlo simulation results. Our implementation is flexible in that the level of redundancy it provides is independent of the number of data generating nodes, a requirement for distributed storage systems
The basic procedure of RS decoding is shown in Figure . The last step of the decoding procedure involves solving a linear set of equations, and can be made efficient by the use of Vandermonde generator matrices @cite_14 .
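The "solve a linear set of equations" step can be illustrated with an evaluation-style view of RS codes, in which the codeword is a Vandermonde matrix applied to the information symbols and any k surviving symbols determine the data by solving the corresponding k x k system. The Python sketch below (prime field GF(929), invented evaluation points and data) is an assumption-level illustration, not the exact construction used in the paper.

P = 929

def vandermonde(points, k):
    return [[pow(x, j, P) for j in range(k)] for x in points]

def solve_mod(A, b):
    # Gauss-Jordan elimination over GF(P); A is assumed invertible, as Vandermonde
    # matrices with distinct evaluation points are.
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for col in range(n):
        piv = next(r for r in range(col, n) if M[r][col] % P)
        M[col], M[piv] = M[piv], M[col]
        inv = pow(M[col][col], P - 2, P)
        M[col] = [v * inv % P for v in M[col]]
        for r in range(n):
            if r != col and M[r][col]:
                f = M[r][col]
                M[r] = [(v - f * w) % P for v, w in zip(M[r], M[col])]
    return [row[-1] for row in M]

info = [17, 42, 93]                               # k = 3 information symbols (toy values)
points = list(range(1, 7))                        # n = 6 storage nodes / evaluation points
V = vandermonde(points, len(info))
codeword = [sum(v * i for v, i in zip(row, info)) % P for row in V]

survivors = [0, 2, 5]                             # any k surviving positions suffice
A = [V[s] for s in survivors]
b = [codeword[s] for s in survivors]
print(solve_mod(A, b))                            # recovers [17, 42, 93]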
{ "cite_N": [ "@cite_14" ], "mid": [ "2432517183" ], "abstract": [ "From the Publisher: This is the revised and greatly expanded Second Edition of the hugely popular Numerical Recipes: The Art of Scientific Computing. The product of a unique collaboration among four leading scientists in academic research and industry, Numerical Recipes is a complete text and reference book on scientific computing. In a self-contained manner it proceeds from mathematical and theoretical considerations to actual practical computer routines. With over 100 new routines (now well over 300 in all), plus upgraded versions of many of the original routines, this book is more than ever the most practical, comprehensive handbook of scientific computing available today. The book retains the informal, easy-to-read style that made the first edition so popular, with many new topics presented at the same accessible level. In addition, some sections of more advanced material have been introduced, set off in small type from the main body of the text. Numerical Recipes is an ideal textbook for scientists and engineers and an indispensable reference for anyone who works in scientific computing. Highlights of the new material include a new chapter on integral equations and inverse methods; multigrid methods for solving partial differential equations; improved random number routines; wavelet transforms; the statistical bootstrap method; a new chapter on \"less-numerical\" algorithms including compression coding and arbitrary precision arithmetic; band diagonal linear systems; linear algebra on sparse matrices; Cholesky and QR decomposition; calculation of numerical derivatives; Pade approximants, and rational Chebyshev approximation; new special functions; Monte Carlo integration in high-dimensional spaces; globally convergent methods for sets of nonlinear equations; an expanded chapter on fast Fourier methods; spectral analysis on unevenly sampled data; Savitzky-Golay smoothing filters; and two-dimensional Kolmogorov-Smirnoff tests. All this is in addition to material on such basic top" ] }
1011.4135
1936083722
To harness the ever growing capacity and decreasing cost of storage, providing an abstraction of dependable storage in the presence of crash-stop and Byzantine failures is compulsory. We propose a decentralized Reed Solomon coding mechanism with minimum communication overhead. Using a progressive data retrieval scheme, a data collector contacts only the necessary number of storage nodes needed to guarantee data integrity. The scheme gracefully adapts the cost of successful data retrieval to the number of storage node failures. Moreover, by leveraging the Welch-Berlekamp algorithm, it avoids unnecessary computations. Compared to the state-of-the-art decoding scheme, the implementation and evaluation results show that our progressive data retrieval scheme has up to 35 times better computation performance for low Byzantine node rates. Additionally, the communication cost in data retrieval is derived analytically and corroborated by Monte-Carlo simulation results. Our implementation is flexible in that the level of redundancy it provides is independent of the number of data generating nodes, a requirement for distributed storage systems
Several XOR-based erasure codes (over the field GF(2)) @cite_9 @cite_4 @cite_8 @cite_16 have been used in storage systems. In RAID-6 systems, each disk is partitioned into strips of fixed size. Two parity strips are computed using one strip from each data disk, forming a stripe together with the data strips. EVEN-ODD @cite_8 , Row Diagonal Parity (RDP) @cite_9 , and Minimal Density RAID-6 codes @cite_4 use XOR operations, and are specific to RAID-6. A detailed comparison of the encoding and decoding performance of several open-source erasure coding libraries for storage is provided in @cite_10 . We mention that the gain in computation efficiency of XOR-based erasure codes is achieved by trading off fault tolerance. Our progressive data retrieval algorithm, however, can tolerate as many faults as needed (according to the configured robustness of the code) and is highly efficient in both computation and communication costs. Moreover, RAID-6 systems can recover from the loss of exactly two disks but cannot handle Byzantine failures, thereby ruling out the application of such systems to sensor networks.
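The XOR mechanics behind such codes can be illustrated with single (row) parity, as in the Python sketch below: the parity strip is the byte-wise XOR of the data strips, and a lost strip is rebuilt by XOR-ing the parity with the survivors. RAID-6 codes such as EVEN-ODD and RDP add a second, diagonally computed parity to survive two losses; the strip contents here are toy values.

from functools import reduce

def xor_strips(strips):
    # Byte-wise XOR of equally sized strips.
    return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*strips))

data_strips = [b"ABCD", b"EFGH", b"IJKL"]      # one strip per data disk (toy values)
parity = xor_strips(data_strips)               # single row-parity strip (RAID-5 style)

# Recover a lost data strip by XOR-ing the parity with the surviving strips.
lost = 1
recovered = xor_strips([parity] + [s for i, s in enumerate(data_strips) if i != lost])
assert recovered == data_strips[lost]
print(recovered)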
{ "cite_N": [ "@cite_4", "@cite_8", "@cite_9", "@cite_16", "@cite_10" ], "mid": [ "2104812861", "201688508", "1622719107", "2164328519", "130792636" ], "abstract": [ "Let F sub q denote the finite field GF(q) and let h be a positive integer. MDS (maximum distance separable) codes over the symbol alphabet F sub q sup b are considered that are linear over F sub q and have sparse (\"low-density\") parity-check and generator matrices over F sub q that are systematic over F sub q sup b . Lower bounds are presented on the number of nonzero elements in any systematic parity-check or generator matrix of an F sub q -linear MDS code over F sub q sup b , along with upper bounds on the length of any MDS code that attains those lower bounds. A construction is presented that achieves those bounds for certain redundancy values. The building block of the construction is a set of sparse nonsingular matrices over F sub q whose pairwise differences are also nonsingular. Bounds and constructions are presented also for the case where the systematic condition on the parity-check and generator matrices is relaxed to be over F sub q , rather than over F sub q sup b .", "An improved musical instrument strap attaching, holding, and supporting device and method for supporting, for example, guitars by slitted straps utilizing uniquely shaped and designed retaining devices. The novel attaching, holding and supporting device is usually located at the bottom end of the guitar body for all types of guitars and also near the neck of the guitar for electic guitars (FIG. 1). The device includes an attachment wedge, usually a screw for electric guitars or wooden wedge for either \"F hole\" or folk or classic guitars, and a central stem portion which is cylindrical in shape which mates with the attachment wedge on one end and a strap retaining head on the other end. The strap retaining head is elongated at one end, forming a generally isosceles triangular shape with curved corners, similar to that of a plectrum, and has a hemispherical projection on its inner side facing the guitar body the combination being used to support the body of the guitar by a shoulder strap or sling placed between the guitar body and the strap retainer and connected by friction and weight to the shoulder of the person playing the guitar and, in the case of \"F Hole\" or folk or classic guitars, to the neck of the guitar by other means such as a string (FIG. 2). The elongated tip of the retaining head is initially inserted into the slit of the strap in a lateral direction and then rotated 90 DEG . The longest dimension of the retainer head is preferrably greater than the length of the slit, and the distance between the tip of the hemispherical projection and the bottom of the central stem is preferrably less than the thickness of the strap.", "Row-Diagonal Parity (RDP) is a new algorithm for protecting against double disk failures. It stores all data unencoded, and uses only exclusive-or operations to compute parity. RDP is provably optimal in computational complexity, both during construction and reconstruction. Like other algorithms, it is optimal in the amount of redundant information stored and accessed. RDP works within a single stripe of blocks of sizes normally used by file systems, databases and disk arrays. It can be utilized in a fixed (RAID-4) or rotated (RAID-5) parity placement style. It is possible to extend the algorithm to encompass multiple RAID-4 or RAID-5 disk arrays in a single RDP disk array. 
It is possible to add disks to an existing RDP array without recalculating parity or moving data. Implementation results show that RDP performance can be made nearly equal to single parity RAID-4 and RAID-5 performance.", "It may not be feasible for sensor networks monitoring nature and inaccessible geographical regions to include powered sinks with Internet connections. We consider the scenario where sinks are not present in large-scale sensor networks, and unreliable sensors have to collectively resort to storing sensed data over time on themselves. At a time of convenience, such cached data from a small subset of live sensors may be collected by a centralized (possibly mobile) collector. In this paper, we propose a decentralized algorithm using fountain codes to guarantee the persistence and reliability of cached data on unreliable sensors. With fountain codes, the collector is able to recover all data as long as a sufficient number of sensors are alive. We use random walks to disseminate data from a sensor to a random subset of sensors in the network. Our algorithms take advantage of the low decoding complexity of fountain codes, as well as the scalability of the dissemination process via random walks. We have proposed two algorithms based on random walks. Our theoretical analysis and simulation-based studies have shown that, the first algorithm maintains the same level of fault tolerance as the original centralized fountain code, while introducing lower overhead than naive random-walk based implementation in the dissemination process. Our second algorithm has lower level of fault tolerance than the original centralized fountain code, but consumes much lower dissemination cost.", "Over the past five years, large-scale storage installations have required fault-protection beyond RAID-5, leading to a flurry of research on and development of erasure codes for multiple disk failures. Numerous open-source implementations of various coding techniques are available to the general public. In this paper, we perform a head-to-head comparison of these implementations in encoding and decoding scenarios. Our goals are to compare codes and implementations, to discern whether theory matches practice, and to demonstrate how parameter selection, especially as it concerns memory, has a significant impact on a code's performance. Additional benefits are to give storage system designers an idea of what to expect in terms of coding performance when designing their storage systems, and to identify the places where further erasure coding research can have the most impact." ] }
1011.4135
1936083722
To harness the ever-growing capacity and decreasing cost of storage, providing an abstraction of dependable storage in the presence of crash-stop and Byzantine failures is essential. We propose a decentralized Reed-Solomon coding mechanism with minimum communication overhead. Using a progressive data retrieval scheme, a data collector contacts only the number of storage nodes needed to guarantee data integrity. The scheme gracefully adapts the cost of successful data retrieval to the number of storage node failures. Moreover, by leveraging the Welch-Berlekamp algorithm, it avoids unnecessary computations. Compared to the state-of-the-art decoding scheme, the implementation and evaluation results show that our progressive data retrieval scheme has up to 35 times better computation performance for low Byzantine node rates. Additionally, the communication cost in data retrieval is derived analytically and corroborated by Monte-Carlo simulation results. Our implementation is flexible in that the level of redundancy it provides is independent of the number of data-generating nodes, a requirement for distributed storage systems.
In the context of network storage for wireless sensor networks, randomized linear codes @cite_0 and fountain codes @cite_16 have been applied with the objective that a data collector can retrieve unit data from each of @math data sources by accessing any @math out of @math storage nodes, and thus up to @math crash-stop node failures can be tolerated. However, such schemes cannot recover from data modifications in the field. Compared to erasure-based solutions, the key distinctions are that i) coding is done at the storage nodes rather than at the data source, and ii) each storage node only has unit capacity. Later, we provide a reference implementation of a single data collector problem using the proposed primitives. Our evaluation study shows that our implementation outperforms the distributed storage scheme based on random linear network coding in almost all metrics.
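The following Python sketch makes the "any k out of n" retrieval property concrete with a toy Reed-Solomon-style MDS erasure code over the prime field GF(257): the data symbols are the coefficients of a polynomial, each storage node holds one evaluation, and any k evaluations recover the data by solving the resulting Vandermonde system. This is purely illustrative; the field size, evaluation points and parameters are arbitrary choices, and it is not the coding scheme of this paper or of the cited works.

```python
# Toy (n, k) MDS erasure code over GF(257); illustrative sketch only.
import random

P = 257  # small prime field; every data symbol is an integer in [0, P)

def encode(data, n):
    """Evaluate the polynomial whose coefficients are the k data symbols at x = 1..n."""
    return [(x, sum(d * pow(x, i, P) for i, d in enumerate(data)) % P)
            for x in range(1, n + 1)]

def decode(shares, k):
    """Recover the k data symbols from any k coded symbols by Gauss-Jordan
    elimination on the Vandermonde system, all arithmetic modulo P."""
    xs, ys = zip(*shares[:k])
    A = [[pow(x, i, P) for i in range(k)] + [y] for x, y in zip(xs, ys)]
    for col in range(k):
        piv = next(r for r in range(col, k) if A[r][col] != 0)  # pivot always exists
        A[col], A[piv] = A[piv], A[col]
        inv = pow(A[col][col], P - 2, P)                         # modular inverse
        A[col] = [v * inv % P for v in A[col]]
        for r in range(k):
            if r != col and A[r][col]:
                f = A[r][col]
                A[r] = [(a - f * b) % P for a, b in zip(A[r], A[col])]
    return [A[i][k] for i in range(k)]

data = [42, 7, 123, 200]              # k = 4 unit data items
coded = encode(data, n=8)             # spread over n = 8 storage nodes
survivors = random.sample(coded, 4)   # any k = 4 surviving nodes suffice
assert decode(survivors, k=4) == data # up to n - k = 4 crash-stop failures are tolerated
```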
{ "cite_N": [ "@cite_0", "@cite_16" ], "mid": [ "1966612101", "2164328519" ], "abstract": [ "In this correspondence, we consider the problem of constructing an erasure code for storage over a network when the data sources are distributed. Specifically, we assume that there are n storage nodes with limited memory and k < n sources generating the data. We want a data collector, who can appear anywhere in the network, to query any k storage nodes and be able to retrieve the data. We introduce decentralized erasure codes, which are linear codes with a specific randomized structure inspired by network coding on random bipartite graphs. We show that decentralized erasure codes are optimally sparse, and lead to reduced communication, storage and computation cost over random linear coding.", "It may not be feasible for sensor networks monitoring nature and inaccessible geographical regions to include powered sinks with Internet connections. We consider the scenario where sinks are not present in large-scale sensor networks, and unreliable sensors have to collectively resort to storing sensed data over time on themselves. At a time of convenience, such cached data from a small subset of live sensors may be collected by a centralized (possibly mobile) collector. In this paper, we propose a decentralized algorithm using fountain codes to guarantee the persistence and reliability of cached data on unreliable sensors. With fountain codes, the collector is able to recover all data as long as a sufficient number of sensors are alive. We use random walks to disseminate data from a sensor to a random subset of sensors in the network. Our algorithms take advantage of the low decoding complexity of fountain codes, as well as the scalability of the dissemination process via random walks. We have proposed two algorithms based on random walks. Our theoretical analysis and simulation-based studies have shown that, the first algorithm maintains the same level of fault tolerance as the original centralized fountain code, while introducing lower overhead than naive random-walk based implementation in the dissemination process. Our second algorithm has lower level of fault tolerance than the original centralized fountain code, but consumes much lower dissemination cost." ] }
1011.4654
2949790096
Carrier Sense Multiple Access with Enhanced Collision Avoidance (CSMA ECA) is a distributed MAC protocol that allows collision-free access to the medium in WLAN. The only difference between CSMA ECA and the well-known CSMA CA is that the former uses a deterministic backoff after successful transmissions. Collision-free operation is reached after a transient state during which some collisions may occur. This article shows that the duration of the transient state can be shortened by appropriately setting the contention parameters. Standard absorbing Markov Chain theory can be used to describe the behaviour of the system in the transient state and to predict the expected number of slots to reach the collision-free operation. The article also introduces CSMA E2CA, in which a deterministic backoff is used two consecutive times after a successful transmission. CSMA E2CA converges quicker to collision-free operation and delivers higher performance than CSMA CA in harsh wireless scenarios with high frame error rates. To achieve collision-free operations when the number of contenders is large, it may be necessary to dynamically adjust the contention parameter. The last part of the article suggests an approach for such parameter adjustment which is validated by simulation results.
In WLANs, it is possible to substantially reduce the duration of empty slots because propagation times are short, so a station can quickly determine that a slot is empty by simply sensing the channel. Shortening the duration of empty slots increases the efficiency of the MAC protocol. This is the idea underlying CSMA CA. An additional performance improvement can be obtained when the aggressiveness of the contending stations is adjusted as a function of the number of contending stations @cite_3 . Channel observation and advanced filtering techniques can be used to obtain an accurate estimate of the number of contenders and adjust the contention parameters accordingly @cite_4 . It is even possible to get close to the optimal efficiency of CSMA CA without any knowledge of the number of contenders: since the optimal collision probability is an invariant that does not depend on the number of contenders, control theory and a feedback loop can be used to adjust the contention parameters @cite_19 .
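As a rough, hedged sketch of this control-theoretic idea (not the algorithm of the cited work), the fragment below keeps the measured collision probability near a fixed target by proportionally nudging the contention window; the target value, gain, bounds and update rule are hypothetical choices for illustration only.

```python
# Proportional feedback on the contention window (CW) driven by the measured
# collision probability; all constants are hypothetical, illustrative values.
P_TARGET = 0.10            # assumed target collision probability (invariant of the optimum)
GAIN = 200.0               # proportional gain
CW_MIN, CW_MAX = 16, 1024  # bounds on the contention window

def update_cw(cw, busy_slots, collision_slots):
    """One control step over a measurement window of observed slots."""
    if busy_slots == 0:
        return cw
    p_obs = collision_slots / busy_slots   # empirical collision probability
    cw += GAIN * (p_obs - P_TARGET)        # too many collisions -> enlarge CW, and vice versa
    return int(min(max(cw, CW_MIN), CW_MAX))

cw = 32
cw = update_cw(cw, busy_slots=100, collision_slots=25)  # p_obs = 0.25, above the target
print(cw)  # 62: the contention window grows to reduce contention
```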
{ "cite_N": [ "@cite_19", "@cite_4", "@cite_3" ], "mid": [ "", "2098080835", "2126345586" ], "abstract": [ "", "The performance of the distributed coordination function (DCF) of the IEEE 802.11 protocol has been shown to heavily depend on the number of terminals accessing the distributed medium. The DCF uses a carrier sense multiple access scheme with collision avoidance (CSMA CA), where the backoff parameters are fixed and determined by the standard. While those parameters were chosen to provide a good protocol performance, they fail to provide an optimum utilization of the channel in many scenarios. In particular, under heavy load scenarios, the utilization of the medium can drop tenfold. Most of the optimization mechanisms proposed in the literature are based on adapting the DCF backoff parameters to the estimate of the number of competing terminals in the network. However, existing estimation algorithms are either inaccurate or too complex. In this paper, we propose an enhanced version of the IEEE 802.11 DCF that employs an adaptive estimator of the number of competing terminals based on sequential Monte Carlo methods. The algorithm uses a Bayesian approach, optimizing the backoff parameters of the DCF based on the predictive distribution of the number of competing terminals. We show that our algorithm is simple yet highly accurate even at small time scales. We implement our proposed new DCF in the ns-2 simulator and show that it outperforms existing methods. We also show that its accuracy can be used to improve the results of the protocol even when the terminals are not in saturation mode. Moreover, we show that there exists a Nash equilibrium strategy that prevents rogue terminals from changing their parameters for their own benefit, making the algorithm safely applicable in a complete distributed fashion", "In wireless LANs (WLANs), the medium access control (MAC) protocol is the main element that determines the efficiency in sharing the limited communication bandwidth of the wireless channel. In this paper we focus on the efficiency of the IEEE 802.11 standard for WLANs. Specifically, we analytically derive the average size of the contention window that maximizes the throughput, hereafter theoretical throughput limit, and we show that: 1) depending on the network configuration, the standard can operate very far from the theoretical throughput limit; and 2) an appropriate tuning of the backoff algorithm can drive the IEEE 802.11 protocol close to the theoretical throughput limit. Hence we propose a distributed algorithm that enables each station to tune its backoff algorithm at run-time. The performances of the IEEE 802.11 protocol, enhanced with our algorithm, are extensively investigated by simulation. Specifically, we investigate the sensitiveness of our algorithm to some network configuration parameters (number of active stations, presence of hidden terminals). Our results indicate that the capacity of the enhanced protocol is very close to the theoretical upper bound in all the configurations analyzed." ] }
1011.4654
2949790096
Carrier Sense Multiple Access with Enhanced Collision Avoidance (CSMA ECA) is a distributed MAC protocol that allows collision-free access to the medium in WLAN. The only difference between CSMA ECA and the well-known CSMA CA is that the former uses a deterministic backoff after successful transmissions. Collision-free operation is reached after a transient state during which some collisions may occur. This article shows that the duration of the transient state can be shortened by appropriately setting the contention parameters. Standard absorbing Markov Chain theory can be used to describe the behaviour of the system in the transient state and to predict the expected number of slots to reach the collision-free operation. The article also introduces CSMA E2CA, in which a deterministic backoff is used two consecutive times after a successful transmission. CSMA E2CA converges quicker to collision-free operation and delivers higher performance than CSMA CA in harsh wireless scenarios with high frame error rates. To achieve collision-free operations when the number of contenders is large, it may be necessary to dynamically adjust the contention parameter. The last part of the article suggests an approach for such parameter adjustment which is validated by simulation results.
A first example of these protocols is Early Backoff Announcement (EBA), introduced in @cite_22 , which proposes that each station announces its backoff intentions in a special header to avoid collisions. A caveat is that currently deployed hardware would discard EBA packets, since they are not IEEE 802.11 compliant, and backward compatibility is therefore compromised.
{ "cite_N": [ "@cite_22" ], "mid": [ "2129374252" ], "abstract": [ "The IEEE 802.11 standard for wireless local area networks (WLANs) employs a medium access control (MAC), called distributed coordination function (DCF), which is based on carrier sense multiple access with collision avoidance (CSMA CA). The collision avoidance mechanism utilizes the random backoff prior to each frame transmission attempt. The random nature of the backoff reduces the collision probability, but cannot completely eliminate collisions. It is known that the throughput performance of the 802.11 WLAN is significantly compromised as the number of stations increases. In this paper, we propose a novel distributed reservation-based MAC protocol, called early backoff announcement (EBA), which is backward compatible with the legacy DCF. Under EBA, a station announces its future backoff information in terms of the number of backoff slots via the MAC header of its frame being transmitted. All the stations receiving the information avoid collisions by excluding the same backoff duration when selecting their future backoff value. Through extensive simulations, EBA is found to achieve a significant increase in the throughput performance as well as a higher degree of fairness compared to the 802.11 DCF." ] }
1011.4654
2949790096
Carrier Sense Multiple Access with Enhanced Collision Avoidance (CSMA ECA) is a distributed MAC protocol that allows collision-free access to the medium in WLAN. The only difference between CSMA ECA and the well-known CSMA CA is that the former uses a deterministic backoff after successful transmissions. Collision-free operation is reached after a transient state during which some collisions may occur. This article shows that the duration of the transient state can be shortened by appropriately setting the contention parameters. Standard absorbing Markov Chain theory can be used to describe the behaviour of the system in the transient state and to predict the expected number of slots to reach the collision-free operation. The article also introduces CSMA E2CA, in which a deterministic backoff is used two consecutive times after a successful transmission. CSMA E2CA converges quicker to collision-free operation and delivers higher performance than CSMA CA in harsh wireless scenarios with high frame error rates. To achieve collision-free operations when the number of contenders is large, it may be necessary to dynamically adjust the contention parameter. The last part of the article suggests an approach for such parameter adjustment which is validated by simulation results.
ZeroCollision @cite_20 is a MAC protocol that also converges to a collision-free mode of operation. The operation of ZeroCollision is similar to Reservation-Aloha. The slots are grouped in rounds, each round containing @math slots, and the stations keep track of the status of each slot (either busy or empty). A station that successfully transmits in the @math -th slot ( @math ) obtains an implicit reservation for that slot. If this station has more packets to transmit, it will use the @math -th slot again in the next transmission round. Stations that have no reservation transmit randomly in any of the slots that were empty in the previous round.
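The round-based reservation behaviour described above can be illustrated with a minimal simulation. The following sketch is a simplified model (the number of slots per round, the number of stations and the saturation assumption are arbitrary choices), not the authors' protocol or code.

```python
# Simplified round-based reservation model: a station that succeeds alone in slot i
# keeps slot i in the next round; stations without a reservation pick uniformly among
# the slots that were empty in the previous round.
import random

L, N = 16, 8                        # slots per round and saturated stations (N <= L)
reservation = [None] * N            # slot currently reserved by each station, if any
empty_last_round = list(range(L))   # before the first round every slot was idle

for rnd in range(1, 101):
    choice = [reservation[s] if reservation[s] is not None
              else random.choice(empty_last_round) for s in range(N)]
    counts = [choice.count(i) for i in range(L)]
    # a slot with exactly one transmitter is a success and becomes a reservation
    reservation = [choice[s] if counts[choice[s]] == 1 else None for s in range(N)]
    # with N <= L at least one slot stays empty, so contenders always have a choice
    empty_last_round = [i for i in range(L) if counts[i] == 0]
    if all(r is not None for r in reservation):
        print(f"collision-free operation reached after {rnd} round(s)")
        break
```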
{ "cite_N": [ "@cite_20" ], "mid": [ "1484545291" ], "abstract": [ "This paper proposes and analyzes a distributed MAC protocol that achieves zero collision with no control message exchange nor synchronization. ZC (ZeroCollision) is neither reservation-based nor dynamic TDMA; the protocol supports variable-length packets and does not lose efficiency when some of the stations do not transmit. At the same time, ZC is not a CSMA; in its steady state, it is completely collision-free. The stations transmit repeatedly in a round-robin order once the convergence state is reached. If some stations skip their turn, their transmissions are replaced by idle @math -second mini-slots that enable the other stations to keep track of their order. Because of its short medium access delay and its efficiency, the protocol supports both real-time and elastic applications. The protocol allows for nodes leaving and joining the network; it can allocate more throughput to specific nodes (such as an access point). The protocol is robust against carrier sensing errors or clock drift. While collision avoidance is guaranteed in a single collision domain, it is not the case in a multiple collision one. However, experiments show ZC supports a comparable amount of goodput to CSMA in a multiple collision domain environment. The paper presents an analysis and extensive simulations of the protocol, confirming that ZC outperforms both CSMA and TDMA at high and low load." ] }
1011.4654
2949790096
Carrier Sense Multiple Access with Enhanced Collision Avoidance (CSMA ECA) is a distributed MAC protocol that allows collision-free access to the medium in WLAN. The only difference between CSMA ECA and the well-known CSMA CA is that the former uses a deterministic backoff after successful transmissions. Collision-free operation is reached after a transient state during which some collisions may occur. This article shows that the duration of the transient state can be shortened by appropriately setting the contention parameters. Standard absorbing Markov Chain theory can be used to describe the behaviour of the system in the transient state and to predict the expected number of slots to reach the collision-free operation. The article also introduces CSMA E2CA, in which a deterministic backoff is used two consecutive times after a successful transmission. CSMA E2CA converges quicker to collision-free operation and delivers higher performance than CSMA CA in harsh wireless scenarios with high frame error rates. To achieve collision-free operations when the number of contenders is large, it may be necessary to dynamically adjust the contention parameter. The last part of the article suggests an approach for such parameter adjustment which is validated by simulation results.
An insightful comparison between CSMA ECA and ZeroCollision is presented in @cite_8 . ZeroCollision converges more quickly to collision-free operation, while CSMA ECA behaves more similarly to the current CSMA CA; this similarity may ease the implementation of CSMA ECA. The results in this reference also show that ZeroCollision is superior to CSMA ECA in the presence of channel errors. Additionally, the cited article extends both protocols to converge more quickly to collision-free operation, to deliver higher performance in lossy channels, and to operate in a collision-free fashion in the presence of a large number of stations. The names of these extended protocols are L-ZC, L-MAC, A-L-ZC and A-L-MAC. Furthermore, a new concept is introduced that we will revisit in Section as a means to substantially improve the performance of CSMA ECA.
{ "cite_N": [ "@cite_8" ], "mid": [ "1586023924" ], "abstract": [ "By combining the features of CSMA and TDMA, fully decentralised WLAN MAC schemes have recently been proposed that converge to collision-free schedules. In this paper we describe a MAC with optimal long-run throughput that is almost decentralised. We then design two schemes that are practically realisable, decentralised approximations of this optimal scheme and operate with different amounts of sensing information. We achieve this by (1) introducing learning algorithms that can substantially speed up convergence to collision free operation; (2) developing a decentralised schedule length adaptation scheme that provides long-run fair (uniform) access to the medium while maintaining collision-free access for arbitrary numbers of stations." ] }
1011.4654
2949790096
Carrier Sense Multiple Access with Enhanced Collision Avoidance (CSMA ECA) is a distributed MAC protocol that allows collision-free access to the medium in WLAN. The only difference between CSMA ECA and the well-known CSMA CA is that the former uses a deterministic backoff after successful transmissions. Collision-free operation is reached after a transient state during which some collisions may occur. This article shows that the duration of the transient state can be shortened by appropriately setting the contention parameters. Standard absorbing Markov Chain theory can be used to describe the behaviour of the system in the transient state and to predict the expected number of slots to reach the collision-free operation. The article also introduces CSMA E2CA, in which a deterministic backoff is used two consecutive times after a successful transmission. CSMA E2CA converges quicker to collision-free operation and delivers higher performance than CSMA CA in harsh wireless scenarios with high frame error rates. To achieve collision-free operations when the number of contenders is large, it may be necessary to dynamically adjust the contention parameter. The last part of the article suggests an approach for such parameter adjustment which is validated by simulation results.
The idea of using a deterministic backoff after successful transmissions is also presented in @cite_21 , where it is called Semi-Random Backoff (SRB). This reference studies the convergence to collision-free operation, the collision probability, and the performance in the presence of hidden terminals. Delay measures are also presented and many other performance issues are explored. It is concluded that SRB performs as well as or better than the random backoff in all the scenarios considered.
{ "cite_N": [ "@cite_21" ], "mid": [ "2143747785" ], "abstract": [ "This paper proposes a semi-random backoff (SRB) method that enables resource reservation in contention-based wireless LANs. The proposed SRB is fundamentally different from traditional random backoff methods because it provides an easy migration path from random backoffs to deterministic slot assignments. The central idea of the SRB is for the wireless station to set its backoff counter to a deterministic value upon a successful packet transmission. This deterministic value will allow the station to reuse the time-slot in consecutive backoff cycles. When multiple stations with successful packet transmissions reuse their respective time-slots, the collision probability is reduced, and the channel achieves the equivalence of resource reservation. In case of a failed packet transmission, a station will revert to the standard random backoff method and probe for a new available time-slot. The proposed SRB method can be readily applied to both 802.11 DCF and 802.11e EDCA networks with minimum modification to the existing DCF EDCA implementations. Theoretical analysis and simulation results validate the superior performance of the SRB for small-scale and heavily loaded wireless LANs. When combined with an adaptive mechanism and a persistent backoff process, SRB can also be effective for large-scale and lightly loaded wireless networks." ] }
1011.4654
2949790096
Carrier Sense Multiple Access with Enhanced Collision Avoidance (CSMA ECA) is a distributed MAC protocol that allows collision-free access to the medium in WLAN. The only difference between CSMA ECA and the well-known CSMA CA is that the former uses a deterministic backoff after successful transmissions. Collision-free operation is reached after a transient state during which some collisions may occur. This article shows that the duration of the transient state can be shortened by appropriately setting the contention parameters. Standard absorbing Markov Chain theory can be used to describe the behaviour of the system in the transient state and to predict the expected number of slots to reach the collision-free operation. The article also introduces CSMA E2CA, in which a deterministic backoff is used two consecutive times after a successful transmission. CSMA E2CA converges quicker to collision-free operation and delivers higher performance than CSMA CA in harsh wireless scenarios with high frame error rates. To achieve collision-free operations when the number of contenders is large, it may be necessary to dynamically adjust the contention parameter. The last part of the article suggests an approach for such parameter adjustment which is validated by simulation results.
The focus of the present paper is on single-hop WLAN communications. The broad idea of learning MAC protocols, stickiness and convergence to a collision-free schedule has been studied in the context of wireless multi-hop networks in @cite_12 @cite_5 @cite_16 .
{ "cite_N": [ "@cite_5", "@cite_16", "@cite_12" ], "mid": [ "2136136555", "2161089045", "2127380142" ], "abstract": [ "We propose a novel approach to QoS for real-time traffic over wireless mesh networks, in which application layer characteristics are exploited or shaped in the design of medium access control. Specifically, we consider the problem of efficiently supporting a mix of Voice over IP (VoIP) and delay-insensitive traffic, assuming a narrowband physical layer with CSMA CA capabilities. The VoIP call carrying capacity of wireless mesh networks based on classical CSMA CA (e.g., the IEEE 802.11 standard) is low compared to the raw available bandwidth, due to lack of bandwidth and delay guarantees. Time Division Multiplexing (TDM) could potentially provide such guarantees, but it requires fine-grained network-wide synchronization and scheduling, which are difficult to implement. In this paper, we introduce Sticky CSMA CA, a new medium access mechanism that provides TDM-like performance to real-time flows without requiring explicit synchronization. We exploit the natural periodicity of VoIP flows to obtain implicit synchronization and multiplexing gains. Nodes monitor the medium using the standard CSMA CA mechanism, except that they remember the recent history of activity in the medium. A newly arriving VoIP flow uses this information to grab the medium at the first available opportunity, and then sticks to a periodic schedule, providing delay and bandwidth guarantees. Delay-insensitive traffic fills the gaps left by the real-time flows using novel contention mechanisms to ensure efficient use of the leftover bandwidth. Large gains over IEEE 802.11 networks are demonstrated in terms of increased voice call carrying capacity (more than 100 in some cases). We briefly discuss extensions of these ideas to a broader class of real-time applications, in which artificially imposing periodicity (or some other form of regularity) at the application layer can lead to significant enhancements of QoS due to improved medium access.", "Personal communications and mobile computing will require a wireless network infrastructure which is fast deployable, possibly multihop, and capable of multimedia service support. The first infracture of this type was the packet radio network (PRNET), developed in the 70's to address the battlefield and disaster recovery communication requirements. PRNET was totally asynchronous and was based on a completely distributed architecture. It handled datagram traffic reasonably well, but did not offer efficient multimedia support. Recently, under the WAMIS and Glomo ARPA programs several mobile, multimedia, multihop (M sup 3 ) wireless network architectures have been developed, which assume some form of synchronous, time division infrastructure. The synchronous time frame leads to efficient multimedia support implementations. However, it introduces more complexity and is less robust in the face of mobility and channel fading. In this paper; we examine the impact of synchronization on wireless M sup 3 network performance. First, we introduce MACA PR, an asynchronous network based on the collision avoidance MAC scheme employed in the IEEE 802.11 standard. There, we evaluate and compare several wireless packet networks ranging from the total asynchronous PRNET to the synchronized cluster TDMA network. 
We examine the tradeoffs between time synchronization and performance in various traffic and mobility environments.", "Aggregate traffic loads and topology in multihop wireless networks may vary slowly, permitting MAC protocols to \"learn\" how to spatially coordinate and adapt contention patterns. Such an approach could reduce contention, leading to better throughput. To that end, we propose a family of MAC scheduling algorithms and demonstrate general conditions, which, if satisfied, ensure lattice rate optimality (i.e., achieving any rate-point on a uniform discrete lattice within the throughput region). This general framework enables the design of MAC protocols that meet various objectives and conditions. In this paper, as instances of such a lattice-rate-optimal family, we propose distributed, synchronous contention-based scheduling algorithms that: 1) are lattice-rate-optimal under both the signal-to-interference-plus-noise ratio (SINR)-based and graph-based interference models; 2) do not require node location information; and 3) only require three-stage RTS CTS message exchanges for contention signaling. Thus, the protocols are amenable to simple implementation and may be robust to network dynamics such as topology and load changes. Finally, we propose a heuristic, which also belongs to the proposed lattice-rate-optimal family of protocols and achieves faster convergence, leading to a better transient throughput." ] }
1011.3840
2951436137
A celebrated theorem of Savitch states that NSPACE(S) is contained in DSPACE(S^2). In particular, Savitch gave a deterministic algorithm to solve ST-CONNECTIVITY (an NL-complete problem) using O(log^2 n) space, implying NL is in DSPACE(log^2 n). While Savitch's theorem itself has not been improved in the last four decades, studying the space complexity of several special cases of ST-CONNECTIVITY has provided new insights into the space-bounded complexity classes. In this paper, we introduce a new kind of graph connectivity problem, which we call graph realizability problems. All of our graph realizability problems are generalizations of UNDIRECTED ST-CONNECTIVITY. ST-REALIZABILITY, the most general graph realizability problem, is LogCFL-complete. We define the corresponding complexity classes that lie between L and LogCFL and study their relationships. As special cases of our graph realizability problems we define two natural problems, BALANCED ST-CONNECTIVITY and POSITIVE BALANCED ST-CONNECTIVITY, that lie between L and NL. We present a deterministic O(log n log log n) space algorithm for BALANCED ST-CONNECTIVITY. More generally, we prove that SGSLogCFL, a generalization of BALANCED ST-CONNECTIVITY, is contained in DSPACE(log n log log n). To achieve this goal we generalize several concepts (such as graph squaring and transitive closure) and algorithms (such as parallel algorithms) known in the context of UNDIRECTED ST-CONNECTIVITY.
Symmetric AuxPDAs : In sec:ustconn , we define , a symmetric" version of . To study the space complexity of we define symmetric auxiliary pushdown automata, a natural generalization of symmetric Turing machines introduced by Lewis and Papadimitriou @cite_17 . We introduce a new complexity class called , a generalization of and show that @math @math .
{ "cite_N": [ "@cite_17" ], "mid": [ "2039302045" ], "abstract": [ "Abstract A symmetric Turing machine is one such that the “yields” relation between configurations is symmetric. The space complexity classes for such machines are found to be intermediate between the corresponding deterministic and nondeterministic space complexity classes. Certain natural problems are shown to be complete for symmetric space complexity classes, and the relationship of symmetry to determinism and nondeterminism is investigated." ] }
1011.3588
1778912783
In this paper, we consider a Gaussian multiple access channel with multiple independent additive white Gaussian interferences. Each interference is known to exactly one transmitter non-causally. The capacity region is characterized to within a constant gap regardless of channel parameters. These results are based on a layered modulo-lattice scheme which realizes distributed interference cancellation.
State-dependent networks with partial state knowledge available at different nodes have been studied in various scenarios. Kotagiri et al. @cite_7 study the state-dependent two-user MAC with the state non-causally known to one transmitter, and for the Gaussian case they characterize the capacity asymptotically at infinite interference ( @math ) as the informed transmitter's power grows to infinity. Somekh-Baruch et al. @cite_1 study the problem with the same set-up as @cite_7 , except that the informed transmitter also knows the other transmitter's message, and they characterize the capacity region completely. Zaidi et al. @cite_15 study another case with a degraded message set. The achievability parts of @cite_7 , @cite_1 , and @cite_15 are based on random binning. Philosof et al. @cite_19 , on the other hand, characterize the capacity region of the doubly-dirty MAC to within a constant gap at infinite interference (i.e., @math , @math ) using lattice strategies @cite_14 . They also show that strategies based on Gaussian random binning are unboundedly worse than lattice-based strategies. Zaidi et al. @cite_11 @cite_13 and Akhbari et al. @cite_12 study a state-dependent relay channel where the state is known only at either the source or the relay.
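For background, the modulo-lattice (lattice precoding) operation underlying the lattice strategies mentioned above can be sketched as follows for the single-user dirty-paper setting Y = X + S + N; this is the standard dither-and-scale construction, written here only as a reminder and not as the exact scheme of any of the cited multiple-access works.

```latex
% Standard modulo-lattice precoding sketch (background only).
\begin{align*}
\text{Transmitter: } & X = \big[\, v - \alpha S - U \,\big] \bmod \Lambda,
  \qquad U \sim \mathrm{Unif}(\mathcal{V}_\Lambda) \ \text{(common dither)} \\
\text{Receiver: } & \big[\, \alpha Y + U \,\big] \bmod \Lambda
  = \big[\, v + \alpha N - (1-\alpha) X \,\big] \bmod \Lambda ,
\end{align*}
% so the effective noise is independent of the interference S, whatever its power.
% With the MMSE choice of the scaling coefficient, the achievable rate is within a
% constant gap of the interference-free capacity, which is the property exploited by
% the lattice-based schemes discussed above.
```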
{ "cite_N": [ "@cite_14", "@cite_7", "@cite_1", "@cite_19", "@cite_15", "@cite_13", "@cite_12", "@cite_11" ], "mid": [ "2143664144", "2087705319", "1970816336", "2952966314", "2110016660", "2090210622", "2152137554", "2114973618" ], "abstract": [ "We consider the generalized dirty-paper channel Y=X+S+N,E X sup 2 spl les P sub X , where N is not necessarily Gaussian, and the interference S is known causally or noncausally to the transmitter. We derive worst case capacity formulas and strategies for \"strong\" or arbitrarily varying interference. In the causal side information (SI) case, we develop a capacity formula based on minimum noise entropy strategies. We then show that strategies associated with entropy-constrained quantizers provide lower and upper bounds on the capacity. At high signal-to-noise ratio (SNR) conditions, i.e., if N is weak relative to the power constraint P sub X , these bounds coincide, the optimum strategies take the form of scalar lattice quantizers, and the capacity loss due to not having S at the receiver is shown to be exactly the \"shaping gain\" 1 2log(2 spl pi e 12) spl ap 0.254 bit. We extend the schemes to obtain achievable rates at any SNR and to noncausal SI, by incorporating minimum mean-squared error (MMSE) scaling, and by using k-dimensional lattices. For Gaussian N, the capacity loss of this scheme is upper-bounded by 1 2log2 spl pi eG( spl Lambda ), where G( spl Lambda ) is the normalized second moment of the lattice. With a proper choice of lattice, the loss goes to zero as the dimension k goes to infinity, in agreement with the results of Costa. These results provide an information-theoretic framework for the study of common communication problems such as precoding for intersymbol interference (ISI) channels and broadcast channels.", "We consider a state-dependent multiaccess channel (MAC) with state noncausally known to some encoders. For simplicity of exposition, we focus on a two-encoder model in which one of the encoders has noncausal access to the channel state. The results can in principle be extended to any number of encoders with a subset of them being informed. We derive an inner bound for the capacity region in the general discrete memoryless case and specialize to a binary noiseless case. In binary noiseless case, we compare the inner bounds with trivial outer bounds obtained by providing the channel state to the decoder. In the case of maximum entropy channel state, we obtain the capacity region for binary noiseless MAC with one informed encoder. For a Gaussian state-dependent MAC with one encoder being informed of the channel state, we present an inner bound by applying a slightly generalized dirty paper coding (GDPC) at the informed encoder and a trivial outer bound by providing channel state to the decoder also. In particular, if the channel input is negatively correlated with the channel state in the random coding distribution, then GDPC can be interpreted as partial state cancellation followed by standard dirty paper coding. The uninformed encoders benefit from the state cancellation in terms of achievable rates, however, it seems that GDPC cannot completely eliminate the effect of the channel state on the achievable rate region, in contrast to the case of all encoders being informed. 
In the case of infinite state variance, we provide an inner bound and also provide a nontrivial outer bound for this case which is better than the trivial outer bound.", "We generalize the Gel'fand-Pinsker model to encompass the setup of a memoryless multiple-access channel (MAC). According to this setup, only one of the encoders knows the state of the channel (noncausally), which is also unknown to the receiver. Two independent messages are transmitted: a common message and a message transmitted by the informed encoder. We find explicit characterizations of the capacity region with both noncausal and causal state information. Further, we study the noise-free binary case, and we also apply the general formula to the Gaussian case with noncausal channel state information, under an individual power constraint as well as a sum power constraint. In this case, the capacity region is achievable by a generalized writing-on-dirty-paper scheme.", "A generalization of the Gaussian dirty-paper problem to a multiple access setup is considered. There are two additive interference signals, one known to each transmitter but none to the receiver. The rates achievable using Costa's strategies (i.e. by a random binning scheme induced by Costa's auxiliary random variables) vanish in the limit when the interference signals are strong. In contrast, it is shown that lattice strategies (\"lattice precoding\") can achieve positive rates independent of the interferences, and in fact in some cases - which depend on the noise variance and power constraints - they are optimal. In particular, lattice strategies are optimal in the limit of high SNR. It is also shown that the gap between the achievable rate region and the capacity region is at most 0.167 bit. Thus, the dirty MAC is another instance of a network setup, like the Korner-Marton modulo-two sum problem, where linear coding is potentially better than random binning. Lattice transmission schemes and conditions for optimality for the asymmetric case, where there is only one interference which is known to one of the users (who serves as a \"helper\" to the other user), and for the \"common interference\" case are also derived. In the former case the gap between the helper achievable rate and its capacity is at most 0.085 bit.", "We consider a two-user state-dependent multiaccess channel in which only one of the encoders is informed, non-causally, of the channel states. Two independent messages are transmitted: a common message transmitted by both the informed and uninformed encoders, and an individual message transmitted by only the uninformed encoder. We derive inner and outer bounds on the capacity region of this model in the discrete memoryless case as well as the Gaussian case. Further, we show that the bounds for the Gaussian case are tight in some special cases.", "We consider a three-terminal state-dependent relay channel with the channel state available non-causally at only the source. Such a model may be of interest for node cooperation in the framework of cognition, i.e., collaborative signal transmission involving cognitive and non-cognitive radios. We study the capacity of this communication model. One principal problem in this setup is caused by the relay's not knowing the channel state. In the discrete memoryless (DM) case, we establish lower bounds on channel capacity. For the Gaussian case, we derive lower and upper bounds on the channel capacity. The upper bound is strictly better than the cut-set upper bound. 
We show that one of the developed lower bounds comes close to the upper bound, asymptotically, for certain ranges of rates.", "In this paper, we consider a discrete memoryless state-dependent relay channel with non-causal Channel State Information (CSI). We investigate three different cases in which perfect channel states can be known non-causally: i) only to the source, ii) only to the relay or iii) both to the source and to the relay node. For these three cases we establish lower bounds on the channel capacity (achievable rates) based on using Gel'fand-Pinsker coding at the nodes where the CSI is available and using Compress-and-Forward (CF) strategy at the relay. Furthermore, for the general Gaussian relay channel with additive independent and identically distributed (i.i.d) states and noise, we obtain lower bounds on the capacity for the cases in which CSI is available at the source or at the relay. We also compare our derived bounds with the previously obtained results which were based on Decode-and-Forward (DF) strategy, and we show the cases in which our derived lower bounds outperform DF based bounds, and can achieve the rates close to the upper bound.", "In this paper, we consider a three-terminal state-dependent relay channel (RC) with the channel state noncausally available at only the relay. Such a model may be useful for designing cooperative wireless networks with some terminals equipped with cognition capabilities, i.e., the relay in our setup. In the discrete memoryless (DM) case, we establish lower and upper bounds on channel capacity. The lower bound is obtained by a coding scheme at the relay that uses a combination of codeword splitting, Gel'fand-Pinsker binning, and decode-and-forward (DF) relaying. The upper bound improves upon that obtained by assuming that the channel state is available at the source, the relay, and the destination. For the Gaussian case, we also derive lower and upper bounds on the capacity. The lower bound is obtained by a coding scheme at the relay that uses a combination of codeword splitting, generalized dirty paper coding (DPC), and DF relaying; the upper bound is also better than that obtained by assuming that the channel state is available at the source, the relay, and the destination. In the case of degraded Gaussian channels, the lower bound meets with the upper bound for some special cases, and, so, the capacity is obtained for these cases. Furthermore, in the Gaussian case, we also extend the results to the case in which the relay operates in a half-duplex mode." ] }
1011.2538
2139919401
A variety of applications are emerging to support streaming video from mobile devices. However, many tasks can benefit from streaming specific content rather than the full video feed which may include irrelevant, private, or distracting content. We describe a system that allows users to capture and stream targeted video content captured with a mobile device. The application incorporates a variety of automatic and interactive techniques to identify and segment desired content, allowing the user to publish a more focused video stream.
While other research projects have explored video retargeting, or automatically selecting salient subregions of a video for redisplay on smaller screens such as mobile devices @cite_6 , mVideoCast uniquely allows users to stream specific ROIs from a mobile device.
{ "cite_N": [ "@cite_6" ], "mid": [ "1980852932" ], "abstract": [ "When a video is displayed on a smaller display than originally intended, some of the information in the video is necessarily lost. In this paper, we introduce Video Retargeting that adapts video to better suit the target display, minimizing the important information lost. We define a framework that measures the preservation of the source material, and methods for estimating the important information in the video. Video retargeting crops each frame and scales it to fit the target display. An optimization process minimizes information loss by balancing the loss of detail due to scaling with the loss of content and composition due to cropping. The cropping window can be moved during a shot to introduce virtual pans and cuts, subject to constraints that ensure cinematic plausibility. We demonstrate results of adapting a variety of source videos to small display sizes." ] }
1011.2538
2139919401
A variety of applications are emerging to support streaming video from mobile devices. However, many tasks can benefit from streaming specific content rather than the full video feed which may include irrelevant, private, or distracting content. We describe a system that allows users to capture and stream targeted video content captured with a mobile device. The application incorporates a variety of automatic and interactive techniques to identify and segment desired content, allowing the user to publish a more focused video stream.
There have been a variety of applications that have used ROIs in video in non-mobile contexts. Researchers have investigated user- and group-defined ROIs to control cameras for remote collaboration tasks @cite_9 @cite_4 . Similarly, the Diver system allows users to create videos from cropped clips of a prerecorded, panoramic video @cite_7 . Other tools have explored automated solutions. El- investigated automatically cropping surveillance videos to salient events @cite_3 . Another focus of past work is the removal of individuals from video recordings or video conference streams (such as @cite_1 ).
{ "cite_N": [ "@cite_4", "@cite_7", "@cite_9", "@cite_1", "@cite_3" ], "mid": [ "77505759", "1989747326", "2052258039", "2049579184", "" ], "abstract": [ "We propose the ShareCam Problem: controlling a single robotic pan, tilt, zoom camera based on simultaneous frame requests from n online users. To solve it, we propose a new piecewise linear metric, Intersection Over Maximum (IOM), for the degree of satisfaction for each users. To maximize overall satisfaction, we present several algorithms. For a discrete set of m distinct zoom levels, we give an exact algorithm that runs in O(n 2 m) time. The algorithm can be distributed to run in O(nm) time at each client and in O(nlog n + mn) time at the server.", "The digital interactive video exploration and reflection (Diver) system lets users create virtual pathways through existing video content using a virtual camera and an annotation window for commentary. Users can post their Dives to the WebDiver server system to generate active collaboration, further repurposing, and discussion. Although our current work focuses on video records in learning research and educational practices, Diver can aid collaborative analysis of a broad array of visual data records, including simulations, 2D and 3D animations, and static works of art, photography, and text. In addition to the social and behavioral sciences, substantive application areas include medical visualization, astronomic data or cosmological models, military satellite intelligence, and ethnology and animal behavior. Diver-style user-centered video repurposing might also prove compelling for popular media with commercial application involving sports events, movies, television shows, and video gaming. Future technical development includes possible enhancements to the interface to support simultaneous display of multiple Dives on the same source content, a more fluid two-way relation between desktop Diver and WebDiver, and solutions to the current limitations on displaying and authoring time space cropped videos in a browser context. These developments support the tool's fundamentally collaborative, communication-oriented nature.", "FlySPEC is a video camera system designed for real-time remote operation. A hybrid design combines the high resolution of an optomechanical video camera with the wide field of view always available from a panoramic camera. The control system integrates requests from multiple users so that each controls a virtual camera. The control system seamlessly integrates manual and fully automatic control. It supports a range of options from untended automatic to full manual control. The system can also learn control strategies from user requests. Additionally, the panoramic view is always available for an intuitive interface, and objects are never out of view regardless of the zoom factor. We present the system architecture, an information-theoretic approach to combining panoramic and zoomed images to optimally satisfy user requests, and experimental results that show the FlySPEC system significantly assists users in a remote inspection tasks.", "This paper presents a system for protecting the privacy of specific individuals in video recordings. We address the following two problems: automatic people identification with limited labeled data, and human body obscuring with preserved structure and motion information. 
In order to address the first problem, we propose a new discriminative learning algorithm to improve people identification accuracy using limited training data labeled from the original video and imperfect pairwise constraints labeled from face obscured video data. We employ a robust face detection and tracking algorithm to obscure human faces in the video. Our experiments in a nursing home environment show that the system can obtain a high accuracy of people identification using limited labeled data and noisy pairwise constraints. The study result indicates that human subjects can perform reasonably well in labeling pairwise constraints with the face masked data. For the second problem, we propose a novel method of body obscuring, which removes the appearance information of the people while preserving rich structure and motion information. The proposed approach provides a way to minimize the risk of exposing the identities of the protected people while maximizing the use of the captured data for activity behavior analysis.", "" ] }
1011.2807
1506701710
The K-Nearest Neighbor (KNN) join is an expensive but important operation in many data mining algorithms. Several recent applications need to perform KNN join for high dimensional sparse data. Unfortunately, all existing KNN join algorithms are designed for low dimensional data. To fill this void, we investigate the KNN join problem for high dimensional sparse data. In this paper, we propose three KNN join algorithms: a brute force (BF) algorithm, an inverted index-based (IIB) algorithm and an improved inverted index-based (IIIB) algorithm. Extensive experiments on both synthetic and real-world datasets were conducted to demonstrate the effectiveness of our algorithms for high dimensional sparse data.
The authors of @cite_7 deal with high dimensional sparse data, but their objective is to find all pairs whose similarity score is above a threshold rather than to find, for each point, its @math nearest neighbors. Moreover, they focus on self-similarity search and propose an efficient in-memory approach.
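To illustrate why inverted indices suit high-dimensional sparse data, the sketch below answers a single KNN query under cosine similarity by scoring only the points that share at least one nonzero dimension with the query; a KNN join would simply repeat this query for every point of one dataset against an index built over the other. This is a generic illustration with made-up toy data, not the IIB/IIIB algorithms of this paper nor the threshold-based method of @cite_7 .

```python
# Generic inverted-index sketch for sparse vectors: only points that share a nonzero
# dimension with the query are ever scored. Data and names are illustrative.
from collections import defaultdict
import heapq, math

def build_index(points):
    """points: {point_id: {dim: value}} sparse vectors -> {dim: [(point_id, value), ...]}."""
    index = defaultdict(list)
    for pid, vec in points.items():
        for dim, val in vec.items():
            index[dim].append((pid, val))
    return index

def knn(query, points, index, k):
    """Return the k nearest neighbours of `query` under cosine similarity."""
    dots = defaultdict(float)
    for dim, qval in query.items():              # accumulate dot products via the index
        for pid, val in index.get(dim, ()):
            dots[pid] += qval * val
    qnorm = math.sqrt(sum(v * v for v in query.values()))
    sims = {pid: d / (qnorm * math.sqrt(sum(v * v for v in points[pid].values())))
            for pid, d in dots.items()}
    return heapq.nlargest(k, sims.items(), key=lambda kv: kv[1])

points = {1: {"a": 1.0, "b": 2.0}, 2: {"b": 1.0, "c": 3.0}, 3: {"d": 5.0}}
index = build_index(points)
print(knn({"b": 1.0, "c": 1.0}, points, index, k=2))  # point 2 ranks above point 1
```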
{ "cite_N": [ "@cite_7" ], "mid": [ "2097776316" ], "abstract": [ "Given a large collection of sparse vector data in a high dimensional space, we investigate the problem of finding all pairs of vectors whose similarity score (as determined by a function such as cosine distance) is above a given threshold. We propose a simple algorithm based on novel indexing and optimization strategies that solves this problem without relying on approximation methods or extensive parameter tuning. We show the approach efficiently handles a variety of datasets across a wide setting of similarity thresholds, with large speedups over previous state-of-the-art approaches." ] }
1011.0950
1486133176
The task of verifying the compatibility between interacting web services has traditionally been limited to checking the compatibility of the interaction protocol in terms of message sequences and the type of data being exchanged. Since web services are developed largely in an uncoordinated way, different services often use independently developed ontologies for the same domain instead of adhering to a single ontology as a standard. In this work we investigate the approaches that can be taken by the server to verify the possibility of reaching a state with semantically inconsistent results during the execution of a protocol with a client, if the client ontology is published. Often a database is used to store the actual data along with the ontologies, instead of storing the actual data as part of the ontology description. It is important to observe that at the current state of the database the semantic conflict state may not be reached even if the verification done by the server indicates the possibility of reaching a conflict state. A relational-algebra-based decision procedure is also developed to incorporate the current state of the client and the server databases in the overall verification procedure.
On the other hand, current research in the semantic web is focused on standardizing the ontologies used by web services, with a vision of computers becoming capable of analyzing all web data. Semantic matchmaking @cite_4 @cite_6 and discovery of semantic web services @cite_2 @cite_16 @cite_19 are two important research directions in the semantic web. The underlying objective of these approaches is to compare facts belonging to different ontologies and to evaluate their compatibility. Standards such as RDF, OWL, and WSML have been developed for this purpose.
{ "cite_N": [ "@cite_4", "@cite_6", "@cite_19", "@cite_2", "@cite_16" ], "mid": [ "2105156533", "2146192209", "2129830608", "2007668401", "2164376763" ], "abstract": [ "Web service is an important technology to implement business to business e-commerce interaction. The first step toward this interaction is the discovery of other services. In this paper we claim that discovery of Web services should be based on the semantic match between service advertisement and service request. We show how service capabilities are presented by the OWL-S profile, an OWL based language for service description. In particular, we describe the design and implementation of a service matchmaking agent which uses an OWL-S based ontology and a OWL reasoner to compare ontology based service descriptions.", "The service discovery based on semantic description plays an important role in the process of Web service composition. Traditional approaches to modeling semantic similarity between Web services compute subsume relationship for function parameters in service profiles within a single ontology. In this paper, we introduce a new graph theoretic framework based on bipartite graph matching for finding the best correspondences among function parameters belonging to advertisement and request. On computing semantic similarity between a pair of function parameters, we present a novel similarity function determining similar entity, which relaxes the requirement of a single ontology and accounts for the different ontology specifications", "The growing number of web services advocates distributed discovery infrastructures which are semantics-enabled and support quality of service (QoS). In this paper, we introduce a novel approach for semantic discovery of web services in P2P-based registries taking into account QoS characteristics. We distribute (semantic) service advertisements among available registries such that it is possible to quickly identify the repositories containing the best probable matching services. Additionally, we represent the information relevant for the discovery process using Bloom filters and pre-computed matching information such that search efforts are minimized when querying for services with a certain functional QoS profile. Query results can be ranked and users can provide feedbacks on the actual QoS provided by a service. To evaluate the credibility of these user reports when predicting service quality, we include a robust trust and reputation management mechanism.", "This paper describes a framework for ontology-based flexible discovery of Semantic Web services. The proposed approach relies on user-supplied, context-specific mappings from an user ontology to relevant domain ontologies used to specify Web services. We show how a user's query for a Web service that meets certain selection criteria can be transformed into queries that can be processed by a matchmaking engine that is aware of the relevant domain ontologies and Web services. We also describe how user-specified preferences for Web services in terms of non-functional requirements (e.g., QoS) can be incorporated into the Web service discovery mechanism to generate a partially ordered list of services that meet user-specified functional requirements.", "We present an approach to hybrid semantic Web service matching that complements logic based reasoning with approximate matching based on syntactic IR based similarity computations. The hybrid matchmaker, called OWLS-MX, applies this approach to services and requests specified in OWL-S. 
Experimental results of measuring performance and scalability of different variants of OWLS-MX show that under certain constraints logic based only approaches to OWL-S service I O matching can be significantly outperformed by hybrid ones." ] }
1011.0950
1486133176
The task of verifying the compatibility between interacting web services has traditionally been limited to checking the compatibility of the interaction protocol in terms of message sequences and the type of data being exchanged. Since web services are developed largely in an uncoordinated way, different services often use independently developed ontologies for the same domain instead of adhering to a single ontology as a standard. In this work we investigate the approaches that can be taken by the server to verify the possibility of reaching a state with semantically inconsistent results during the execution of a protocol with a client, if the client ontology is published. Often a database is used to store the actual data along with the ontologies, instead of storing the actual data as part of the ontology description. It is important to observe that at the current state of the database the semantic conflict state may not be reached even if the verification done by the server indicates the possibility of reaching a conflict state. A relational-algebra-based decision procedure is also developed to incorporate the current state of the client and the server databases in the overall verification procedure.
Ontology plays an important role in enhancing the integration and interoperability of semantic web services. A significant amount of research has been done towards formalizing the notion of conflict between two ontologies. In @cite_7 , the authors present a detailed classification of conflicts by distinguishing between conceptualization and explication mismatches. In @cite_25 , the authors further generalize the notion of conflicts and classify semantic mismatches into language level mismatches and ontology level mismatches; ontology level mismatches are then further classified into conceptualization mismatches and explication mismatches. Further research in the same direction @cite_3 adds a few new types of conceptualization mismatches. Researchers in @cite_10 present alternative types of conflicts that are primarily relevant to OWL-based ontologies. However, the primary focus of these works is the interoperability between two ontologies rather than the correctness of the protocol for information exchange with respect to the interpretation of the exchanged data.
{ "cite_N": [ "@cite_3", "@cite_10", "@cite_25", "@cite_7" ], "mid": [ "2140642976", "187110590", "1482102309", "1504190376" ], "abstract": [ "Ontology, as the means for conceptualizing and structuring domain knowledge, has become the backbone to enable the fulfillment of the semantic Web vision. But as the number of different online ontologies is growing significantly, the problem of managing semantic heterogeneity is increasingly seen. This semantic heterogeneity originates as a mismatch in the way the domain is interpreted and modeled. Many researches identified the reasons of semantic mismatches and categorized them into three main classes' i.e. linguistic mismatch, conceptualization mismatch and explication mismatch. In this paper, we have identified three more types of conceptualization mismatches. We have demonstrated the importance of such mismatches by giving different scenarios where appropriate. These mismatches should be paid more attention during ontology integration for sound semantic Web environment, because if ignored then we cannot take combined use of independently developed ontologies effectively.", "Ontology provides sharing knowledge among different data sources which will help to clarify the semantics of information. OWL is being promoted as a standard for web ontology language, thus in the future a considerable number of ontologies will be created based on OWL. Therefore automatically detecting semantic conflicts based on OWL will greatly expedite the step to achieve semantic interoperability, and will greatly reduce the manual work to detect semantic conflicts. In this paper, we summarize seven cases based on OWL in which semantic conflicts are easily to be encountered, and for each case, we give a rule to resolve the conflict. Via a series of examples, we show how this method works. Based on the seven cases, we propose a semantic conflicts detection and resolution algorithm. This aims at providing a semi-automatic method for semantic uniform data interoperability and integration.", "With the grown availability of large and specialized online ontologies, the questions about the combined use of independently developed ontologies have become even more important. Although there is already a lot of research done in this area, there are still many open questions. In this paper we try to classify the problems that may arise into a common framework. We then use that framework to examine several projects that aim at some ontology combination task, thus sketching the state of the art. We conclude with an overview of the different approaches and some recommandations for future research.", "The growth of the Internet has revitalised research on the integration of heterogeneous information sources. Integration efforts face a trade off between interoperability and heterogeneity. Important integration obstacles arise from the differences in the underlying ontologies of the various sources. In this paper we investigate the impediments to integration, focussing on ontologies. In particular, we present a classification of ontology mismatches (distinguishing conceptualisation mismatches and explication mismatches as its main categories), and discuss how each of the mismatch types can be dealt with. The idea is that knowing which ontology mismatches are difficult to deal with may contribute to finding a balance between heterogeneity and interoperability." ] }
1011.0950
1486133176
The task of verifying the compatibility between interacting web services has traditionally been limited to checking the compatibility of the interaction protocol in terms of message sequences and the type of data being exchanged. Since web services are developed largely in an uncoordinated way, different services often use independently developed ontologies for the same domain instead of adhering to a single ontology as standard. In this work we investigate the approaches that can be taken by the server to verify the possibility to reach a state with semantically inconsistent results during the execution of a protocol with a client, if the client ontology is published. Often database is used to store the actual data along with the ontologies instead of storing the actual data as a part of the ontology description. It is important to observe that at the current state of the database the semantic conflict state may not be reached even if the verification done by the server indicates the possibility of reaching a conflict state. A relational algebra based decision procedure is also developed to incorporate the current state of the client and the server databases in the overall verification procedure.
Ontology mapping primarily focuses on combining multiple heterogeneous ontologies. In @cite_9 , the authors address the problem of specifying a mapping between a global ontology and a set of local ontologies. In @cite_8 , the authors discuss establishing a mapping between local ontologies. In @cite_21 , the problem of ontology alignment and automatic merging is addressed.
{ "cite_N": [ "@cite_9", "@cite_21", "@cite_8" ], "mid": [ "65486233", "1590030008", "2001737243" ], "abstract": [ "One of the basic problems in the development of techniques for the semantic web is the integration of ontologies. Indeed, the web is constituted by a variety of information sources, each expressed over a certain ontology, and in order to extract information from such sources, their semantic integration and reconciliation in terms of a global ontology is required. In this paper, we address the fundamental problem of how to specify the mapping between the global ontology and the local ontologies. We argue that for capturing such mapping in an appropriate way, the notion of query is a crucial one, since it is very likely that a concept in one ontology corresponds to a view (i.e., a query) over the other ontologies. As a result query processing in ontology integration systems is strongly related to view-based query answering in data integration.", "", "Mappings between disparate models are fundamental to any application that requires interoperability between heterogeneous data and applications. Generating mappings is a labor-intensive and error prone task. To build a system that helps users generate mappings, we need an explicit representation of mappings. This representation needs to have well-defined semantics to enable reasoning and comparison between mappings. This paper first presents a powerful framework for defining languages for specifying mappings and their associated semantics. We examine the use of mappings and identify the key inference problems associated with mappings. These properties can be used to determine whether a mapping is adequate in a particular context. Finally, we consider an instance of our framework for a language representing mappings between relational data. We present sound and complete algorithms for the corresponding inference problems." ] }
1011.0950
1486133176
The task of verifying the compatibility between interacting web services has traditionally been limited to checking the compatibility of the interaction protocol in terms of message sequences and the type of data being exchanged. Since web services are developed largely in an uncoordinated way, different services often use independently developed ontologies for the same domain instead of adhering to a single ontology as standard. In this work we investigate the approaches that can be taken by the server to verify the possibility to reach a state with semantically inconsistent results during the execution of a protocol with a client, if the client ontology is published. Often database is used to store the actual data along with the ontologies instead of storing the actual data as a part of the ontology description. It is important to observe that at the current state of the database the semantic conflict state may not be reached even if the verification done by the server indicates the possibility of reaching a conflict state. A relational algebra based decision procedure is also developed to incorporate the current state of the client and the server databases in the overall verification procedure.
A significant amount of research has been done towards the development of interaction protocols. In @cite_23 , researchers propose a methodology for developing protocols in a multi-agent environment. They extend propositional dynamic logic to formally specify the protocol and also use an extension of statecharts for visual representation. In @cite_14 , a step-by-step procedure is presented for the development of web service interaction protocols, from the problem definition to the final specification. However, these approaches are focused on the development of protocols for multi-agent environments; the semantics of the exchanged data is not addressed in these works.
{ "cite_N": [ "@cite_14", "@cite_23" ], "mid": [ "1569955706", "1507018340" ], "abstract": [ "Much current research is focussed on developing agent interaction protocols (AIPs) that will ensure seamless interaction amongst agents in multi agent systems. The research covers areas such as desired properties of AIPs, reasoning about interaction types, languages and tools for representing AIPs, and implementing AIPs. However, there has been little work on defining the structural make up of an agent interaction protocol, or defining dedicated approaches for developing agent interaction protocols from a clear problem definition to the final specification. This paper addresses these gaps. We present a dedicated approach for developing agent interaction protocols. Our approach is driven by an analysis of the application domain and our proposed structured agent interaction protocol definition.", "Although interaction protocols are often part of multi-agent infrastructures, many of the published protocols are semi-formal, vague or contain errors. Formal presentations can counter such disadvantages since they are amenable to verification of correctness. On the other hand, a diagrammatic representation of system structure is easier to comprehend. To this end, this paper bridges the gap between formal specification and intuitive development by: (1) proposing an extended form of propositional dynamic logic for expressing protocols completely, with clear semantics, that can be converted to a programming language for interaction protocols and (2) developing extended statecharts as a diagrammatic counterpart." ] }
1011.1035
2952510816
The problem of identifying the 3D pose of a known object from a given 2D image has important applications in Computer Vision ranging from robotic vision to image analysis. Our proposed method of registering a 3D model of a known object on a given 2D photo of the object has numerous advantages over existing methods: It does neither require prior training nor learning, nor knowledge of the camera parameters, nor explicit point correspondences or matching features between image and model. Unlike techniques that estimate a partial 3D pose (as in an overhead view of traffic or machine parts on a conveyor belt), our method estimates the complete 3D pose of the object, and works on a single static image from a given view, and under varying and unknown lighting conditions. For this purpose we derive a novel illumination-invariant distance measure between 2D photo and projected 3D model, which is then minimised to find the best pose parameters. Results for vehicle pose detection are presented.
Model-based object recognition has received considerable attention in the computer vision community. A survey by Chin and Dyer @cite_18 shows that model-based object recognition algorithms generally fall into three categories, based on the type of object representation used: 2D representations, 2.5D representations, or 3D representations.
{ "cite_N": [ "@cite_18" ], "mid": [ "2026311529" ], "abstract": [ "This paper presents a comparative study and survey of model-based object-recognition algorithms for robot vision. The goal of these algorithms is to recognize the identity, position, and orientation of randomly oriented industrial parts. In one form this is commonly referred to as the \"bin-picking\" problem, in which the parts to be recognized are presented in a jumbled bin. The paper is organized according to 2-D, 2½-D, and 3-D object representations, which are used as the basis for the recognition algorithms. Three central issues common to each category, namely, feature extraction, modeling, and matching, are examined in detail. An evaluation and comparison of existing industrial part-recognition systems and algorithms is given, providing insights for progress toward future robot vision systems." ] }
1011.1035
2952510816
The problem of identifying the 3D pose of a known object from a given 2D image has important applications in Computer Vision ranging from robotic vision to image analysis. Our proposed method of registering a 3D model of a known object on a given 2D photo of the object has numerous advantages over existing methods: It does neither require prior training nor learning, nor knowledge of the camera parameters, nor explicit point correspondences or matching features between image and model. Unlike techniques that estimate a partial 3D pose (as in an overhead view of traffic or machine parts on a conveyor belt), our method estimates the complete 3D pose of the object, and works on a single static image from a given view, and under varying and unknown lighting conditions. For this purpose we derive a novel illumination-invariant distance measure between 2D photo and projected 3D model, which is then minimised to find the best pose parameters. Results for vehicle pose detection are presented.
2D representations store information about a particular 2D view of an object (a characteristic view) as a model and use this information to identify the object in a 2D image. Global feature methods have been used by Gleason and Agin @cite_1 to identify objects like spanners and nuts on a conveyor belt. Such methods model the object using features such as its area, perimeter, the number of visible holes, and other global properties. Structural features like boundary segments have been used by Perkins @cite_10 to detect machine parts using 2D models. A relational graph method has been used by Yachida and Tsuji @cite_15 to match objects to a 2D model using graph matching techniques. These 2D representation-based algorithms require prior training of the system using a 'show by example' method.
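To illustrate the flavour of such global-feature methods (and not any specific cited system), the following Python sketch matches a detected blob to stored 2D models by nearest-neighbour comparison of area, perimeter and hole count; the function name, the feature weights and the numerical values are illustrative assumptions only.

def classify_by_global_features(candidate, models, weights=(1.0, 1.0, 1.0)):
    # Match a detected blob to a stored 2D model by global shape features.
    # candidate and models[name] are dicts with 'area', 'perimeter', 'holes'.
    # Returns the name of the closest model (nearest neighbour in feature space).
    def distance(a, b):
        feats = ('area', 'perimeter', 'holes')
        return sum(w * (a[f] - b[f]) ** 2 for w, f in zip(weights, feats))
    return min(models, key=lambda name: distance(candidate, models[name]))

# Example: models learned "by example" from reference views (invented numbers).
models = {
    'spanner': {'area': 5200.0, 'perimeter': 610.0, 'holes': 2},
    'nut':     {'area': 900.0,  'perimeter': 120.0, 'holes': 1},
}
blob = {'area': 5100.0, 'perimeter': 598.0, 'holes': 2}
print(classify_by_global_features(blob, models))   # -> 'spanner'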
{ "cite_N": [ "@cite_10", "@cite_15", "@cite_1" ], "mid": [ "2069225252", "1906361164", "" ], "abstract": [ "A vision system has been developed which can determine the position and orientation of complex curved objects in gray-level noisy scenes. The system organizes and reduces the image data from a digitized picture to a compact representation having the appearance of a line drawing. This compact image representation can be used for forming a model under favorable viewing conditions or for locating a part under poor viewing conditions by a matching process that uses a previously formed model. Thus, models are formed automatically by having the program view the part under favorable lighting and background conditions. The compact image representation describes the boundaries of the part.", "As a step to automate assembly of various industrial parts, this paper describes a versatile machine vision system that can recognize a variety of complex industrial parts and measure the necessary parameters for assembly, such as the locations of screw holes. Emphasis is given to a method for extracting useful features from the scene data for complex industrial parts so that accurate recognition of them is possible. The proposed method has the following features: 1) simple features are detected first in the scene and more complex features are examined later, using the locations of the previously found features; 2) the system is provided with a high-level supervisor that analyzes the current information obtained from the scene and structural models of various objects, and proposes the feartures to be examined next for recognizing the objects in the scene; 3) the supervisor has problem-solving capabilities to select the most promising feature among many others; 4) the structural models are used to suggest the locations of the features to be examined; and 5) several sophisticated feature extractors are used to detect the complex features. An effort is also made to make the system versatile so that it can be readily applied to a variety of different industrial parts. The proposed system has been tested on several sets of parts for small industrial gasoline engines and the results were satisfactory.", "" ] }
1011.1035
2952510816
The problem of identifying the 3D pose of a known object from a given 2D image has important applications in Computer Vision ranging from robotic vision to image analysis. Our proposed method of registering a 3D model of a known object on a given 2D photo of the object has numerous advantages over existing methods: It does neither require prior training nor learning, nor knowledge of the camera parameters, nor explicit point correspondences or matching features between image and model. Unlike techniques that estimate a partial 3D pose (as in an overhead view of traffic or machine parts on a conveyor belt), our method estimates the complete 3D pose of the object, and works on a single static image from a given view, and under varying and unknown lighting conditions. For this purpose we derive a novel illumination-invariant distance measure between 2D photo and projected 3D model, which is then minimised to find the best pose parameters. Results for vehicle pose detection are presented.
Implicit Shape Models: Recent work by Arie-Nachimson and Basri @cite_17 makes use of 'implicit shape models' to recognise 3D objects from 2D images. The model consists of a set of learned features, their 3D locations, and the views in which they are visible. The learning process is further refined using factorisation methods. Pose estimation consists of evaluating the transformations of the features that give the best match. A typical model requires around 65 training images. There are many different types of cars in use, and new car models are manufactured quite frequently. Therefore, any methodology that requires training car models would be laborious and time-consuming. Hence, a system that does not require such training is preferred for the problem at hand.
{ "cite_N": [ "@cite_17" ], "mid": [ "2535401692" ], "abstract": [ "We present a system that constructs “implicit shape models” for classes of rigid 3D objects and utilizes these models to estimating the pose of class instances in single 2D images. We use the framework of implicit shape models to construct a voting procedure that allows for 3D transformations and projection and accounts for self occlusion. The model is comprised of a collection of learned features, their 3D locations, their appearances in different views, and the set of views in which they are visible. We further learn the parameters of a model from training images by applying a method that relies on factorization. We demonstrate the utility of the constructed models by applying them in pose estimation experiments to recover the viewpoint of class instances." ] }
1011.1035
2952510816
The problem of identifying the 3D pose of a known object from a given 2D image has important applications in Computer Vision ranging from robotic vision to image analysis. Our proposed method of registering a 3D model of a known object on a given 2D photo of the object has numerous advantages over existing methods: It does neither require prior training nor learning, nor knowledge of the camera parameters, nor explicit point correspondences or matching features between image and model. Unlike techniques that estimate a partial 3D pose (as in an overhead view of traffic or machine parts on a conveyor belt), our method estimates the complete 3D pose of the object, and works on a single static image from a given view, and under varying and unknown lighting conditions. For this purpose we derive a novel illumination-invariant distance measure between 2D photo and projected 3D model, which is then minimised to find the best pose parameters. Results for vehicle pose detection are presented.
Feature-based methods: Work done in @cite_13 and later in @cite_3 attempts to simultaneously solve the pose and point correspondence problems. The success of these methods is affected by the quality of the features extracted from the object, which is non-trivial for objects like cars. Our method, on the contrary, does not depend on feature extraction.
{ "cite_N": [ "@cite_13", "@cite_3" ], "mid": [ "2165919662", "1752823667" ], "abstract": [ "The problem of pose estimation arises in many areas of computer vision, including object recognition, object tracking, site inspection and updating, and autonomous navigation when scene models are available. We present a new algorithm, called SoftPOSIT, for determining the pose of a 3D object from a single 2D image when correspondences between object points and image points are not known. The algorithm combines the iterative softassign algorithm (Gold and Rangarajan, 1996; , 1998) for computing correspondences and the iterative POSIT algorithm (DeMenthon and Davis, 1995) for computing object pose under a full-perspective camera model. Our algorithm, unlike most previous algorithms for pose determination, does not have to hypothesize small sets of matches and then verify the remaining image points. Instead, all possible matches are treated identically throughout the search for an optimal pose. The performance of the algorithm is extensively evaluated in Monte Carlo simulations on synthetic data under a variety of levels of clutter, occlusion, and image noise. These tests show that the algorithm performs well in a variety of difficult scenarios, and empirical evidence suggests that the algorithm has an asymptotic run-time complexity that is better than previous methods by a factor of the number of image points. The algorithm is being applied to a number of practical autonomous vehicle navigation problems including the registration of 3D architectural models of a city to images, and the docking of small robots onto larger robots.", "Estimating a camera pose given a set of 3D-object and 2D-image feature points is a well understood problem when correspondences are given. However, when such correspondences cannot be established a priori, one must simultaneously compute them along with the pose. Most current approaches to solving this problem are too computationally intensive to be practical. An interesting exception is the SoftPosit algorithm, that looks for the solution as the minimum of a suitable objective function. It is arguably one of the best algorithms but its iterative nature means it can fail in the presence of clutter, occlusions, or repetitive patterns. In this paper, we propose an approach that overcomes this limitation by taking advantage of the fact that, in practice, some prior on the camera pose is often available. We model it as a Gaussian Mixture Model that we progressively refine by hypothesizing new correspondences. This rapidly reduces the number of potential matches for each 3D point and lets us explore the pose space more thoroughly than SoftPosit at a similar computational cost. We will demonstrate the superior performance of our approach on both synthetic and real data." ] }
1011.1401
2124698409
We study an exactly solvable quantum field theory (QFT) model describing interacting fermions in 2+1 dimensions. This model is motivated by physical arguments suggesting that it provides an effective description of spinless fermions on a square lattice with local hopping and density-density interactions if, close to half filling, the system develops a partial energy gap. The necessary regularization of the QFT model is based on this proposed relation to lattice fermions. We use bosonization methods to diagonalize the Hamiltonian and to compute all correlation functions. We also discuss how, after appropriate multiplicative renormalizations, all short- and long distance cutoffs can be removed. In particular, we prove that the renormalized two-point functions have algebraic decay with non-trivial exponents depending on the interaction strengths, which is a hallmark of Luttinger-liquid behavior.
The present paper is the third in a series @cite_0 @cite_2 in which we propose and develop a method to do reliable, non-perturbative computations in models of strongly interacting fermions on a 2D square lattice @cite_49 . We have so far considered the simplest non-trivial case, the so-called 2D t-t′-V model, describing spinless fermions with local interactions. In @cite_0 , a 2D analogue of the Luttinger model was derived from the 2D t-t′-V model using a particular partial continuum limit. The interested reader can find more details in the Appendix. This 2D Luttinger model is an extension of the Mattis model by so-called antinodal fermions that cannot be bosonized. In @cite_2 we showed that, within a mean field approximation, there is a non-trivial phase away from half-filling in which the antinodal fermions are gapped, and in this phase the 2D Luttinger model reduces to the Mattis model. One key ingredient of our approach is the notion of underlying Fermi surface arcs (not necessarily corresponding to a true Fermi surface) in the 2D t-t′-V model away from half filling. Previous theoretical work predicting four disconnected Fermi surface arcs in 2D interacting lattice fermion systems includes renormalization group studies in @cite_39 @cite_41 .
{ "cite_N": [ "@cite_41", "@cite_39", "@cite_0", "@cite_49", "@cite_2" ], "mid": [ "2037362143", "2070121489", "2000443329", "2146319748", "1981377153" ], "abstract": [ "", "We study a two-dimensional Fermi liquid with a Fermi surface containing the saddle points @math and @math . Including Cooper and Peierls channel contributions leads to a one-loop renormalization group flow to strong coupling for short range repulsive interactions. In a certain parameter range the characteristics of the fixed point, opening of a spin and charge gap and dominant pairing correlations are similar to those of a 2-leg ladder at half-filling. An increase of the electron density we argue leads to a truncation of the Fermi surface with only 4 disconnected arcs remaining.", "A detailed derivation of a two dimensional (2D) low energy effective model for spinless fermions on a square lattice with local interactions is given. This derivation utilizes a particular continuum limit that is justified by physical arguments. It is shown that the effective model thus obtained can be treated by exact bosonization methods. It is also discussed how this effective model can be used to obtain physical information about the corresponding lattice fermion system.", "We present a fermion model that is, as we suggest, a natural 2D analogue of the Luttinger model. We derive this model as a partial continuum limit of a 2D spinless lattice fermion system with local interactions and away from half filling. In this derivation, we use certain approximations that we motivate by physical arguments. We also present mathematical results that allow an exact treatment of parts of the degrees of freedom of this model by bosonization, and we propose to treat the remaining degrees of freedom by mean field theory.", "We compute mean field phase diagrams of two closely related interacting fermion models in two spatial dimensions (2D). The first is the so-called 2D t-t′-V model describing spinless fermions on a square lattice with local hopping and density-density interactions. The second is the so-called 2D Luttinger model that provides an effective description of the 2D t-t′-V model and in which parts of the fermion degrees of freedom are treated exactly by bosonization. In mean field theory, both models have a charge-density-wave (CDW) instability making them gapped at half-filling. The 2D t-t′-V model has a significant parameter regime away from half-filling where neither the CDW nor the normal state are thermodynamically stable. We show that the 2D Luttinger model allows to obtain more detailed information about this mixed region. In particular, we find in the 2D Luttinger model a partially gapped phase that, as we argue, can be described by an exactly solvable model." ] }
1011.1401
2124698409
We study an exactly solvable quantum field theory (QFT) model describing interacting fermions in 2+1 dimensions. This model is motivated by physical arguments suggesting that it provides an effective description of spinless fermions on a square lattice with local hopping and density-density interactions if, close to half filling, the system develops a partial energy gap. The necessary regularization of the QFT model is based on this proposed relation to lattice fermions. We use bosonization methods to diagonalize the Hamiltonian and to compute all correlation functions. We also discuss how, after appropriate multiplicative renormalizations, all short- and long distance cutoffs can be removed. In particular, we prove that the renormalized two-point functions have algebraic decay with non-trivial exponents depending on the interaction strengths, which is a hallmark of Luttinger-liquid behavior.
One of our main tenets is that progress in understanding these lattice models can be achieved if one clearly distinguishes between approximations justified by physical arguments and manipulations that are mathematically exact (rigorous); the present paper belongs to the latter category. We mention that the model derived in @cite_0 corresponds to the special case @math ; the generalization to different coupling parameters, allowing also for negative values, is natural from a mathematical point of view. We also note that, while an outline of how to diagonalize the Mattis Hamiltonian by bosonization was already given in @cite_0 , Section 6.2, the details and complete solution of the model are given here for the first time.
{ "cite_N": [ "@cite_0" ], "mid": [ "2000443329" ], "abstract": [ "A detailed derivation of a two dimensional (2D) low energy effective model for spinless fermions on a square lattice with local interactions is given. This derivation utilizes a particular continuum limit that is justified by physical arguments. It is shown that the effective model thus obtained can be treated by exact bosonization methods. It is also discussed how this effective model can be used to obtain physical information about the corresponding lattice fermion system." ] }
1011.1401
2124698409
We study an exactly solvable quantum field theory (QFT) model describing interacting fermions in 2+1 dimensions. This model is motivated by physical arguments suggesting that it provides an effective description of spinless fermions on a square lattice with local hopping and density-density interactions if, close to half filling, the system develops a partial energy gap. The necessary regularization of the QFT model is based on this proposed relation to lattice fermions. We use bosonization methods to diagonalize the Hamiltonian and to compute all correlation functions. We also discuss how, after appropriate multiplicative renormalizations, all short- and long distance cutoffs can be removed. In particular, we prove that the renormalized two-point functions have algebraic decay with non-trivial exponents depending on the interaction strengths, which is a hallmark of Luttinger-liquid behavior.
We choose the name of the model studied in this paper to acknowledge a pioneering paper published by Mattis in 1987 in which he pointed out that a 2D interacting fermion Hamiltonian similar to ours can be mapped exactly to a non-interacting boson Hamiltonian @cite_1 ; the dispersion relations of this boson Hamiltonian were given by Mattis (see @cite_1 , Equation (8)) but no details were provided. Mattis also argued that this model can arise from a tight-binding description of 2D lattice fermions at half-filling with a square Fermi surface and in the absence of nesting-type instabilities, but, as shown in @cite_0 , this latter proposal is doubtful (at half-filling, there are additional interaction terms that cannot be bosonized in a simple manner and which are likely to yield a gap, very different to what the Mattis model describes).
{ "cite_N": [ "@cite_0", "@cite_1" ], "mid": [ "2000443329", "2093079154" ], "abstract": [ "A detailed derivation of a two dimensional (2D) low energy effective model for spinless fermions on a square lattice with local interactions is given. This derivation utilizes a particular continuum limit that is justified by physical arguments. It is shown that the effective model thus obtained can be treated by exact bosonization methods. It is also discussed how this effective model can be used to obtain physical information about the corresponding lattice fermion system.", "We consider the tight-binding energy band in sq lattices and determine that in the half-filled case there exists an infrared instability in addition to the 2k sub F (nesting)-type instability. In view of the pseudo-two-dimensional band structure of La sub 2- sub x Ba sub x CuO sub 4 recently proposed by and by Mattheiss, our conclusions should be relevant to this material at least, one of which demonstrates the remarkable phenomenon of high-T sub c superconductivity. To further analyze it, we devise a two-dimensional bosonization scheme. Three-dimensional bosonization is also briefly discussed." ] }
1011.1401
2124698409
We study an exactly solvable quantum field theory (QFT) model describing interacting fermions in 2+1 dimensions. This model is motivated by physical arguments suggesting that it provides an effective description of spinless fermions on a square lattice with local hopping and density-density interactions if, close to half filling, the system develops a partial energy gap. The necessary regularization of the QFT model is based on this proposed relation to lattice fermions. We use bosonization methods to diagonalize the Hamiltonian and to compute all correlation functions. We also discuss how, after appropriate multiplicative renormalizations, all short- and long distance cutoffs can be removed. In particular, we prove that the renormalized two-point functions have algebraic decay with non-trivial exponents depending on the interaction strengths, which is a hallmark of Luttinger-liquid behavior.
Mattis' insight described above has received little attention up to now, and it was re-discovered in 1994 independently by Hlubina @cite_33 and Luther @cite_37 . Hlubina presented a model similar to Mattis' that he claimed can be mapped to a non-interacting boson Hamiltonian, and he diagonalized the latter. He also gave phenomenological arguments suggesting that this model describes a two-dimensional Luttinger liquid. However, from his discussion, it is not clear what the solvable model actually is (the model formulated in @cite_33 , Equation (1), is not exactly solvable but needs to be modified --- the necessary changes are similar to the ones discussed in detail in @cite_0 ). Luther studied spinful fermions with a square Fermi surface and argued that the non-interacting part of the Hamiltonian can be simplified to one that can be bosonized exactly, and which is essentially the same as in Mattis' model. Luther diagonalized the bosonized Hamiltonian that one obtains by also including density-density interactions, initially restricting to interactions between opposite Fermi surface faces only @cite_37 (this restriction corresponds to @math in our notation). This was later extended to also include interactions between adjacent faces in @cite_9 .
{ "cite_N": [ "@cite_0", "@cite_9", "@cite_37", "@cite_33" ], "mid": [ "2000443329", "1548897448", "2000336399", "2054299528" ], "abstract": [ "A detailed derivation of a two dimensional (2D) low energy effective model for spinless fermions on a square lattice with local interactions is given. This derivation utilizes a particular continuum limit that is justified by physical arguments. It is shown that the effective model thus obtained can be treated by exact bosonization methods. It is also discussed how this effective model can be used to obtain physical information about the corresponding lattice fermion system.", "Interacting electrons with a square Fermi surface are investigated from a bosonic point of view taking into account electron scattering between all faces of the square. Fermion operators are classified according to their dimensions and the stability of the boson fixed-point is investigated. In particular we find, in contrast to previous studies, that the square Fermi surface is unstable to doping in the case of no spin gap and microscopic Hubbard interactions.", "Electronic states near a square Fermi surface are mapped onto quantum chains. Using boson-fermion duality on the chains, the bosonic part of the interaction is isolated and diagonalized. These interactions destroy Fermi-liquid behavior. Nonboson interactions are also generated by this mapping, and give rise to an alternate perturbation theory about the boson problem. A case with strong repulsions between parallel faces is studied and solved. This solution discards irrelevant operators in a new interaction Hamiltonian. There is spin-charge separation and the square Fermi surface remains square under doping. At half-filling, there is a charge gap and insulating behavior together with gapless spin excitations. This mapping appears to be a general tool for understanding the properties of interacting electrons on a square Fermi surface.", "We consider spinless electrons in two dimensions with the bare spectrum @math . In momentum space, the interactions among electrons have a finite range @math , which is small compared to the Fermi momentum. A golden rule calculation of the electron lifetime indicates a breakdown of the Landau Fermi liquid in the model. At the one-loop level of perturbation theory, we show that the density wave and the superconducting instabilities cancel each other and there is no symmetry breaking. We solve the model via bosonization; the excitation spectrum is found to consist of gapless bosonic modes as in a one-dimensional Luttinger liquid." ] }
1011.1401
2124698409
We study an exactly solvable quantum field theory (QFT) model describing interacting fermions in 2+1 dimensions. This model is motivated by physical arguments suggesting that it provides an effective description of spinless fermions on a square lattice with local hopping and density-density interactions if, close to half filling, the system develops a partial energy gap. The necessary regularization of the QFT model is based on this proposed relation to lattice fermions. We use bosonization methods to diagonalize the Hamiltonian and to compute all correlation functions. We also discuss how, after appropriate multiplicative renormalizations, all short- and long distance cutoffs can be removed. In particular, we prove that the renormalized two-point functions have algebraic decay with non-trivial exponents depending on the interaction strengths, which is a hallmark of Luttinger-liquid behavior.
An important question is whether the Mattis model describes a 2D Luttinger liquid @cite_5 . Different regularizations and treatments of Mattis-like models can give different results, as exemplified by Refs. @cite_3 and @cite_44 : these works study models that differ in the regularization details of the interaction, use different methods, and obtain different results. This suggests to us that it is important to be mathematically precise both in the definition of the model and of Luttinger-liquid behavior, as well as in the methods used to study them.
{ "cite_N": [ "@cite_44", "@cite_5", "@cite_3" ], "mid": [ "2055896037", "2093475895", "1579142263" ], "abstract": [ "We study instabilities occurring in the electron system whose Fermi surface has flat regions on its opposite sides. Such a Fermi surface resembles Fermi surfaces of some high- @math superconductors. In the framework of the parquet approximation, we classify possible instabilities and derive renormalization-group equations that determine the evolution of corresponding susceptibilities with decreasing temperature. Numerical solutions of the parquet equations are found to be in qualitative agreement with a ladder approximation. For the repulsive Hubbard interaction, the antiferromagnetic (spin-density-wave) instability dominates, but when the Fermi surface is not perfectly flat, the d-wave superconducting instability takes over.", "Analysis of interacting fermion systems shows that there are two fundamentally different fixed points, Fermi-liquid theory and «Luttinger-liquid theory» (Haldane), a state in which charge and spin acquire distinct spectra and correlations have unusual exponents. The Luttinger liquids include most interacting one-dimensional systems, and some higher- (especially two) dimensional systems in which the band spectrum is bounded above: systems with Mott-Hubbard gaps and an upper Hubbard band. We give a theory which is useful in calculating normal-state, and some superconducting, properties of high-T c superconductors", "We consider a system of two-dimensional interacting fermions with a flat Fermi surface. The apparent conflict between Luttinger and non-Luttinger liquid behaviors found through different approximations is resolved by showing the existence of a line of nontrivial fixed points, for the renormalization group (RG) flow, corresponding to Luttinger liquid behavior; the presence of marginally relevant operators can cause flow away from the fixed point. The analysis is nonperturbative and based on the implementation, at each RG iteration, of Ward identities obtained from local phase transformations depending on the Fermi surface side, implying the partial vanishing of the beta function." ] }
1011.1161
1580784396
In this paper we initiate the study of optimization of bandit type problems in scenarios where the feedback of a play is not immediately known. This arises naturally in allocation problems which have been studied extensively in the literature, albeit in the absence of delays in the feedback. We study this problem in the Bayesian setting. In presence of delays, no solution with provable guarantees is known to exist with sub-exponential running time. We show that bandit problems with delayed feedback that arise in allocation settings can be forced to have significant structure, with a slight loss in optimality. This structure gives us the ability to reason about the relationship of single arm policies to the entangled optimum policy, and eventually leads to a O(1) approximation for a significantly general class of priors. The structural insights we develop are of key interest and carry over to the setting where the feedback of an action is available instantaneously, and we improve all previous results in this setting as well.
There is an extensive literature on the MAB problem in the prior-free setting (see @cite_29 @cite_28 @cite_30 ), and policies with additive regret guarantees are known. Regret is the difference between the expected reward of the policy and the reward of an omniscient policy which knows all the distributions. However, these results require both the reward rate to be large and the time horizon @math to be large compared to the number of arms. In the application scenarios mentioned above, it will typically be the case that the number of arms is very large and comparable to the optimization horizon, and the reward rates are low. This motivates the need for a purely multiplicative guarantee instead of additive guarantees. Moreover, the analysis of these policies requires the plays of the arm with maximum estimated reward to be continuous, which is not true in the presence of delays (or budgets). If the delay of arm @math satisfies @math (where @math is the number of arms), then a standard explore-then-exploit strategy (where we play an arm long enough to start receiving the outcomes) gives regret sub-linear in @math . However, the constants in the regret term depend on the scaling of the rewards.
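As a minimal illustration of the explore-then-exploit strategy under delayed feedback (a sketch under assumptions, not the policy analyzed in the cited works), the following Python code plays each arm for a fixed exploration budget, collects outcomes as they arrive after a fixed per-play delay, and then commits to the empirically best arm; the function names, the delay model and all parameter values are invented for illustration.

import random

def explore_then_exploit(arms, horizon, delay, explore_per_arm):
    # arms: list of callables; arms[i]() draws a reward in [0, 1].
    # delay: number of rounds after a play before its reward is observed.
    # Assumes explore_per_arm > delay so every arm has feedback before committing.
    k = len(arms)
    pending, totals, counts, t = [], [0.0] * k, [0] * k, 0

    def play(i):
        nonlocal t
        pending.append((t + delay, i, arms[i]()))  # outcome arrives 'delay' rounds later
        t += 1

    def collect():
        nonlocal pending
        ready = [p for p in pending if p[0] <= t]
        pending = [p for p in pending if p[0] > t]
        for _, i, r in ready:
            totals[i] += r
            counts[i] += 1

    # Exploration phase: play each arm long enough for its outcomes to start arriving.
    for i in range(k):
        for _ in range(explore_per_arm):
            play(i)
            collect()
    collect()

    # Exploitation phase: commit to the arm with the best empirical mean.
    best = max(range(k), key=lambda i: totals[i] / counts[i] if counts[i] else 0.0)
    while t < horizon:
        play(best)
        collect()
    collect()
    return best, sum(totals)

# Example with Bernoulli arms (illustrative parameters only).
means = [0.3, 0.5, 0.7]
arms = [lambda m=m: 1.0 if random.random() < m else 0.0 for m in means]
print(explore_then_exploit(arms, horizon=10000, delay=5, explore_per_arm=50))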
{ "cite_N": [ "@cite_28", "@cite_29", "@cite_30" ], "mid": [ "2168405694", "", "1570963478" ], "abstract": [ "Reinforcement learning policies face the exploration versus exploitation dilemma, i.e. the search for a balance between exploring the environment to find profitable actions while taking the empirically best action as often as possible. A popular measure of a policy's success in addressing this dilemma is the regret, that is the loss due to the fact that the globally optimal policy is not followed all the times. One of the simplest examples of the exploration exploitation dilemma is the multi-armed bandit problem. Lai and Robbins were the first ones to show that the regret for this problem has to grow at least logarithmically in the number of plays. Since then, policies which asymptotically achieve this regret have been devised by Lai and Robbins and many others. In this work we show that the optimal logarithmic regret is also achievable uniformly over time, with simple and efficient policies, and for all reward distributions with bounded support.", "", "1. Introduction 2. Prediction with expert advice 3. Tight bounds for specific losses 4. Randomized prediction 5. Efficient forecasters for large classes of experts 6. Prediction with limited feedback 7. Prediction and playing games 8. Absolute loss 9. Logarithmic loss 10. Sequential investment 11. Linear pattern recognition 12. Linear classification 13. Appendix." ] }
1011.1296
2949333025
Suppose we would like to know all answers to a set of statistical queries C on a data set up to small error, but we can only access the data itself using statistical queries. A trivial solution is to exhaustively ask all queries in C. Can we do any better? + We show that the number of statistical queries necessary and sufficient for this task is---up to polynomial factors---equal to the agnostic learning complexity of C in Kearns' statistical query (SQ) model. This gives a complete answer to the question when running time is not a concern. + We then show that the problem can be solved efficiently (allowing arbitrary error on a small fraction of queries) whenever the answers to C can be described by a submodular function. This includes many natural concept classes, such as graph cuts and Boolean disjunctions and conjunctions. While interesting from a learning theoretic point of view, our main applications are in privacy-preserving data analysis: Here, our second result leads to the first algorithm that efficiently releases differentially private answers to of all Boolean conjunctions with 1 average error. This presents significant progress on a key open problem in privacy-preserving data analysis. Our first result on the other hand gives unconditional lower bounds on any differentially private algorithm that admits a (potentially non-privacy-preserving) implementation using only statistical queries. Not only our algorithms, but also most known private algorithms can be implemented using only statistical queries, and hence are constrained by these lower bounds. Our result therefore isolates the complexity of agnostic learning in the SQ-model as a new barrier in the design of differentially private algorithms.
The problem of learning submodular functions was recently introduced by Balcan and Harvey @cite_2 ; their PAC-style definition was different from previously studied point-wise learning approaches @cite_7 @cite_12 . For product distributions, Balcan and Harvey give an algorithm for learning monotone, Lipschitz continuous submodular functions up to constant error using only random examples. @cite_2 also give strong lower bounds and matching algorithmic results for non-product distributions. Our main algorithmic result is similar in spirit, and is inspired by their concentration-of-measure approach. Our model is different from theirs, which makes our results incomparable. We introduce a decomposition that allows us to learn arbitrary (i.e. potentially non-Lipschitz, non-monotone) submodular functions to constant error. Moreover, our decomposition makes value queries to the submodular function, which are prohibited in the model studied by @cite_2 .
{ "cite_N": [ "@cite_12", "@cite_7", "@cite_2" ], "mid": [ "2106752752", "2278268662", "1642458096" ], "abstract": [ "We introduce several generalizations of classical computer science problems obtained by replacing simpler objective functions with general submodular functions.The new problems include submodular load balancing, which generalizes load balancing or minimum-makespan scheduling, submodular sparsest cut and submodular balanced cut, which generalize their respective graph cut problems, as well as submodular function minimization with a cardinality lower bound. We establish upper and lower bounds for the approximability of these problems with a polynomial number of queries to a function-value oracle.The approximation guarantees for most of our algorithms are of the order of radic(n ln n). We show that this is the inherent difficulty of the problems by proving matching lower bounds.We also give an improved lower bound for the problem of approximately learning a monotone submodular function. In addition, we present an algorithm for approximately learning submodular functions with special structure, whose guarantee is close to the lower bound. Although quite restrictive, the class of functions with this structure includes the ones that are used for lower bounds both by us and in previous work. This demonstrates that if there are significantly stronger lower bounds for this problem, they rely on more general submodular functions.", "", "Submodular functions are discrete functions that model laws of diminishing returns and enjoy numerous algorithmic applications. They have been used in many areas, including combinatorial optimization, machine learning, and economics. In this work we study submodular functions from a learning theoretic angle. We provide algorithms for learning submodular functions, as well as lower bounds on their learnability. In doing so, we uncover several novel structural results revealing ways in which submodular functions can be both surprisingly structured and surprisingly unstructured. We provide several concrete implications of our work in other domains including algorithmic game theory and combinatorial optimization. At a technical level, this research combines ideas from many areas, including learning theory (distributional learning and PAC-style analyses), combinatorics and optimization (matroids and submodular functions), and pseudorandomness (lossless expander graphs)." ] }
1011.1296
2949333025
Suppose we would like to know all answers to a set of statistical queries C on a data set up to small error, but we can only access the data itself using statistical queries. A trivial solution is to exhaustively ask all queries in C. Can we do any better? + We show that the number of statistical queries necessary and sufficient for this task is---up to polynomial factors---equal to the agnostic learning complexity of C in Kearns' statistical query (SQ) model. This gives a complete answer to the question when running time is not a concern. + We then show that the problem can be solved efficiently (allowing arbitrary error on a small fraction of queries) whenever the answers to C can be described by a submodular function. This includes many natural concept classes, such as graph cuts and Boolean disjunctions and conjunctions. While interesting from a learning theoretic point of view, our main applications are in privacy-preserving data analysis: Here, our second result leads to the first algorithm that efficiently releases differentially private answers to of all Boolean conjunctions with 1 average error. This presents significant progress on a key open problem in privacy-preserving data analysis. Our first result on the other hand gives unconditional lower bounds on any differentially private algorithm that admits a (potentially non-privacy-preserving) implementation using only statistical queries. Not only our algorithms, but also most known private algorithms can be implemented using only statistical queries, and hence are constrained by these lower bounds. Our result therefore isolates the complexity of agnostic learning in the SQ-model as a new barrier in the design of differentially private algorithms.
@cite_0 introduced the and models of privacy and gave information theoretic characterizations for which classes of functions could be in these models: they showed that information theoretically, the class of functions that can be learned in the centralized model of privacy is equivalent to the class of functions that can be agnostically PAC learned, and the class of functions that can be learned in the local privacy model is equivalent to the class of functions that can be learned in the SQ model of Kearns @cite_9 .
{ "cite_N": [ "@cite_0", "@cite_9" ], "mid": [ "2245160765", "1995897489" ], "abstract": [ "Learning problems form an important category of computational tasks that generalizes many of the computations researchers apply to large real-life data sets. We ask, What concept classes can be learned privately, namely, by an algorithm whose output does not depend too heavily on any one input or specific training example? More precisely, we investigate learning algorithms that satisfy differential privacy, a notion that provides strong confidentiality guarantees in contexts where aggregate information is released about a database containing sensitive information about individuals. Our goal is a broad understanding of the resources required for private learning in terms of samples, computation time, and interaction. We demonstrate that, ignoring computational constraints, it is possible to privately agnostically learn any concept class using a sample size approximately logarithmic in the cardinality of the concept class. Therefore, almost anything learnable is learnable privately: specifically, if a concept class is learnable by a (nonprivate) algorithm with polynomial sample complexity and output size, then it can be learned privately using a polynomial number of samples. We also present a computationally efficient private probabilistically approximately correct learner for the class of parity functions. This result dispels the similarity between learning with noise and private learning (both must be robust to small changes in inputs), since parity is thought to be very hard to learn given random classification noise. Local (or randomized response) algorithms are a practical class of private algorithms that have received extensive investigation. We provide a precise characterization of local private learning algorithms. We show that a concept class is learnable by a local algorithm if and only if it is learnable in the statistical query (SQ) model. Therefore, for local private learning algorithms, the similarity to learning with noise is stronger: local learning is equivalent to SQ learning, and SQ algorithms include most known noise-tolerant learning algorithms. Finally, we present a separation between the power of interactive and noninteractive local learning algorithms. Because of the equivalence to SQ learning, this result also separates adaptive and nonadaptive SQ learning.", "In this paper, we study the problem of learning in the presence of classification noise in the probabilistic learning model of Valiant and its variants. In order to identify the class of “robust” learning algorithms in the most general way, we formalize a new but related model of learning from statistical queries . Intuitively, in this model a learning algorithm is forbidden to examine individual examples of the unknown target function, but is given acess to an oracle providing estimates of probabilities over the sample space of random examples. One of our main results shows that any class of functions learnable from statistical queries is in fact learnable with classification noise in Valiant's model, with a noise rate approaching the information-theoretic barrier of 1 2. We then demonstrate the generality of the statistical query model, showing that practically every class learnable in Valiant's model and its variants can also be learned in the new model (and thus can be learned in the presence of noise). 
A notable exception to this statement is the class of parity functions, which we prove is not learnable from statistical queries, and for which no noise-tolerant algorithm is known." ] }
1011.1296
2949333025
Suppose we would like to know all answers to a set of statistical queries C on a data set up to small error, but we can only access the data itself using statistical queries. A trivial solution is to exhaustively ask all queries in C. Can we do any better? + We show that the number of statistical queries necessary and sufficient for this task is---up to polynomial factors---equal to the agnostic learning complexity of C in Kearns' statistical query (SQ) model. This gives a complete answer to the question when running time is not a concern. + We then show that the problem can be solved efficiently (allowing arbitrary error on a small fraction of queries) whenever the answers to C can be described by a submodular function. This includes many natural concept classes, such as graph cuts and Boolean disjunctions and conjunctions. While interesting from a learning theoretic point of view, our main applications are in privacy-preserving data analysis: Here, our second result leads to the first algorithm that efficiently releases differentially private answers to of all Boolean conjunctions with 1 average error. This presents significant progress on a key open problem in privacy-preserving data analysis. Our first result on the other hand gives unconditional lower bounds on any differentially private algorithm that admits a (potentially non-privacy-preserving) implementation using only statistical queries. Not only our algorithms, but also most known private algorithms can be implemented using only statistical queries, and hence are constrained by these lower bounds. Our result therefore isolates the complexity of agnostic learning in the SQ-model as a new barrier in the design of differentially private algorithms.
Blum, Ligett, and Roth @cite_21 considered the query release problem (the task of releasing the approximate value of all functions in some class) and characterized exactly which classes of functions can be information-theoretically released while preserving differential privacy in the centralized model of data privacy. They also posed the question: which classes of functions can be released using mechanisms whose running time is only polylogarithmic in the size of the data universe and of the class of interest? In particular, they asked whether conjunctions are such a class.
{ "cite_N": [ "@cite_21" ], "mid": [ "2169570643" ], "abstract": [ "We demonstrate that, ignoring computational constraints, it is possible to release privacy-preserving databases that are useful for all queries over a discretized domain from any given concept class with polynomial VC-dimension. We show a new lower bound for releasing databases that are useful for halfspace queries over a continuous domain. Despite this, we give a privacy-preserving polynomial time algorithm that releases information useful for all halfspace queries, for a slightly relaxed definition of usefulness. Inspired by learning theory, we introduce a new notion of data privacy, which we call distributional privacy, and show that it is strictly stronger than the prevailing privacy notion, differential privacy." ] }
1011.1296
2949333025
Suppose we would like to know all answers to a set of statistical queries C on a data set up to small error, but we can only access the data itself using statistical queries. A trivial solution is to exhaustively ask all queries in C. Can we do any better? + We show that the number of statistical queries necessary and sufficient for this task is---up to polynomial factors---equal to the agnostic learning complexity of C in Kearns' statistical query (SQ) model. This gives a complete answer to the question when running time is not a concern. + We then show that the problem can be solved efficiently (allowing arbitrary error on a small fraction of queries) whenever the answers to C can be described by a submodular function. This includes many natural concept classes, such as graph cuts and Boolean disjunctions and conjunctions. While interesting from a learning theoretic point of view, our main applications are in privacy-preserving data analysis: Here, our second result leads to the first algorithm that efficiently releases differentially private answers to of all Boolean conjunctions with 1 average error. This presents significant progress on a key open problem in privacy-preserving data analysis. Our first result on the other hand gives unconditional lower bounds on any differentially private algorithm that admits a (potentially non-privacy-preserving) implementation using only statistical queries. Not only our algorithms, but also most known private algorithms can be implemented using only statistical queries, and hence are constrained by these lower bounds. Our result therefore isolates the complexity of agnostic learning in the SQ-model as a new barrier in the design of differentially private algorithms.
There are also several conditional lower bounds on the running time of private mechanisms for solving the query release problem. @cite_23 showed that under cryptographic assumptions, there exists a class of queries that can be privately released using the inefficient mechanism of @cite_21 , but cannot be privately released by any mechanism that runs in time polynomial in the dimension of the data universe (e.g. @math , when the data universe is @math ). Ullman and Vadhan @cite_26 extended this result to the class of conjunctions: they showed that under cryptographic assumptions, no polynomial-time mechanism can answer even the set of @math conjunctions of two literals!
{ "cite_N": [ "@cite_21", "@cite_23", "@cite_26" ], "mid": [ "2169570643", "", "86647242" ], "abstract": [ "We demonstrate that, ignoring computational constraints, it is possible to release privacy-preserving databases that are useful for all queries over a discretized domain from any given concept class with polynomial VC-dimension. We show a new lower bound for releasing databases that are useful for halfspace queries over a continuous domain. Despite this, we give a privacy-preserving polynomial time algorithm that releases information useful for all halfspace queries, for a slightly relaxed definition of usefulness. Inspired by learning theory, we introduce a new notion of data privacy, which we call distributional privacy, and show that it is strictly stronger than the prevailing privacy notion, differential privacy.", "", "Assuming the existence of one-way functions, we show that there is no polynomial-time, dierentially private algorithm A that takes a database D2 (f0; 1g d ) n and outputs a database\" ^ D all of whose two-way marginals are approximately equal to those of D. (A two-way marginal is the fraction of database rows x2f0; 1g d with a given pair of values in a given pair of columns.) This answers a question of (PODS ‘07), who gave an algorithm running in time poly(n; 2 d ). Our proof combines a construction of hard-to-sanitize databases based on digital signatures (by , STOC ‘09) with encodings based on the PCP Theorem. We also present both negative and positive results for generating \" synthetic data, where the fraction of rows in D satisfying a predicate c are estimated by applying c to each row of ^ D and aggregating the results in some way." ] }
1011.1296
2949333025
Suppose we would like to know all answers to a set of statistical queries C on a data set up to small error, but we can only access the data itself using statistical queries. A trivial solution is to exhaustively ask all queries in C. Can we do any better? + We show that the number of statistical queries necessary and sufficient for this task is---up to polynomial factors---equal to the agnostic learning complexity of C in Kearns' statistical query (SQ) model. This gives a complete answer to the question when running time is not a concern. + We then show that the problem can be solved efficiently (allowing arbitrary error on a small fraction of queries) whenever the answers to C can be described by a submodular function. This includes many natural concept classes, such as graph cuts and Boolean disjunctions and conjunctions. While interesting from a learning theoretic point of view, our main applications are in privacy-preserving data analysis: Here, our second result leads to the first algorithm that efficiently releases differentially private answers to of all Boolean conjunctions with 1 average error. This presents significant progress on a key open problem in privacy-preserving data analysis. Our first result on the other hand gives unconditional lower bounds on any differentially private algorithm that admits a (potentially non-privacy-preserving) implementation using only statistical queries. Not only our algorithms, but also most known private algorithms can be implemented using only statistical queries, and hence are constrained by these lower bounds. Our result therefore isolates the complexity of agnostic learning in the SQ-model as a new barrier in the design of differentially private algorithms.
The latter lower bound applies only to the class of mechanisms that output data sets, rather than some other data structure encoding their answers, and only to mechanisms that answer conjunctions of two literals with small error. In fact, because there are only @math conjunctions of size 2 in total, the hardness result of @cite_26 does not hold if the mechanism is allowed to output some other data structure -- such a mechanism can simply privately query each of the @math questions.
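As a concrete (and purely illustrative) version of the trivial baseline just mentioned, the sketch below answers every two-literal conjunction directly with the Laplace mechanism, splitting the privacy budget over the quadratically many queries by basic composition. All names and parameter choices are ours, not those of the cited works; the point is only that the per-query noise grows with the number of queries, which is why this approach does not scale to large query classes and motivates the mechanisms discussed in the following paragraphs.

import itertools, random

def all_two_literal_conjunctions(d):
    """Queries asking: what fraction of rows has attribute i equal to s_i and attribute j equal to s_j?"""
    for i, j in itertools.combinations(range(d), 2):
        for s_i, s_j in itertools.product((0, 1), repeat=2):
            yield (i, s_i, j, s_j)

def laplace_noise(scale: float) -> float:
    """Sample Lap(0, scale) as the difference of two exponentials."""
    return random.expovariate(1.0 / scale) - random.expovariate(1.0 / scale)

def answer_all_conjunctions(database, d, eps):
    """database: list of 0/1 tuples of length d; returns noisy fractional answers."""
    queries = list(all_two_literal_conjunctions(d))
    scale = len(queries) / (eps * len(database))  # basic composition, sensitivity 1/n per query
    answers = {}
    for (i, s_i, j, s_j) in queries:
        frac = sum(1 for row in database if row[i] == s_i and row[j] == s_j) / len(database)
        answers[(i, s_i, j, s_j)] = frac + laplace_noise(scale)
    return answers

if __name__ == "__main__":
    random.seed(1)
    d, n = 6, 5000
    db = [tuple(random.randint(0, 1) for _ in range(d)) for _ in range(n)]
    print(len(answer_all_conjunctions(db, d, eps=1.0)), "noisy conjunction answers released")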
{ "cite_N": [ "@cite_26" ], "mid": [ "86647242" ], "abstract": [ "Assuming the existence of one-way functions, we show that there is no polynomial-time, dierentially private algorithm A that takes a database D2 (f0; 1g d ) n and outputs a database\" ^ D all of whose two-way marginals are approximately equal to those of D. (A two-way marginal is the fraction of database rows x2f0; 1g d with a given pair of values in a given pair of columns.) This answers a question of (PODS ‘07), who gave an algorithm running in time poly(n; 2 d ). Our proof combines a construction of hard-to-sanitize databases based on digital signatures (by , STOC ‘09) with encodings based on the PCP Theorem. We also present both negative and positive results for generating \" synthetic data, where the fraction of rows in D satisfying a predicate c are estimated by applying c to each row of ^ D and aggregating the results in some way." ] }
1011.1296
2949333025
Suppose we would like to know all answers to a set of statistical queries C on a data set up to small error, but we can only access the data itself using statistical queries. A trivial solution is to exhaustively ask all queries in C. Can we do any better? + We show that the number of statistical queries necessary and sufficient for this task is---up to polynomial factors---equal to the agnostic learning complexity of C in Kearns' statistical query (SQ) model. This gives a complete answer to the question when running time is not a concern. + We then show that the problem can be solved efficiently (allowing arbitrary error on a small fraction of queries) whenever the answers to C can be described by a submodular function. This includes many natural concept classes, such as graph cuts and Boolean disjunctions and conjunctions. While interesting from a learning theoretic point of view, our main applications are in privacy-preserving data analysis: Here, our second result leads to the first algorithm that efficiently releases differentially private answers to of all Boolean conjunctions with 1 average error. This presents significant progress on a key open problem in privacy-preserving data analysis. Our first result on the other hand gives unconditional lower bounds on any differentially private algorithm that admits a (potentially non-privacy-preserving) implementation using only statistical queries. Not only our algorithms, but also most known private algorithms can be implemented using only statistical queries, and hence are constrained by these lower bounds. Our result therefore isolates the complexity of agnostic learning in the SQ-model as a new barrier in the design of differentially private algorithms.
We circumvent the hardness result of @cite_26 by outputting a data structure rather than a synthetic data set, and by releasing all conjunctions only up to small average error. Although no computational lower bounds are known for releasing conjunctions with small average error, even for algorithms that output a data set, the fact that our algorithm does not output a data set may itself be useful in circumventing the lower bounds of @cite_26 .
{ "cite_N": [ "@cite_26" ], "mid": [ "86647242" ], "abstract": [ "Assuming the existence of one-way functions, we show that there is no polynomial-time, dierentially private algorithm A that takes a database D2 (f0; 1g d ) n and outputs a database\" ^ D all of whose two-way marginals are approximately equal to those of D. (A two-way marginal is the fraction of database rows x2f0; 1g d with a given pair of values in a given pair of columns.) This answers a question of (PODS ‘07), who gave an algorithm running in time poly(n; 2 d ). Our proof combines a construction of hard-to-sanitize databases based on digital signatures (by , STOC ‘09) with encodings based on the PCP Theorem. We also present both negative and positive results for generating \" synthetic data, where the fraction of rows in D satisfying a predicate c are estimated by applying c to each row of ^ D and aggregating the results in some way." ] }
1011.1296
2949333025
Suppose we would like to know all answers to a set of statistical queries C on a data set up to small error, but we can only access the data itself using statistical queries. A trivial solution is to exhaustively ask all queries in C. Can we do any better? + We show that the number of statistical queries necessary and sufficient for this task is---up to polynomial factors---equal to the agnostic learning complexity of C in Kearns' statistical query (SQ) model. This gives a complete answer to the question when running time is not a concern. + We then show that the problem can be solved efficiently (allowing arbitrary error on a small fraction of queries) whenever the answers to C can be described by a submodular function. This includes many natural concept classes, such as graph cuts and Boolean disjunctions and conjunctions. While interesting from a learning theoretic point of view, our main applications are in privacy-preserving data analysis: Here, our second result leads to the first algorithm that efficiently releases differentially private answers to of all Boolean conjunctions with 1 average error. This presents significant progress on a key open problem in privacy-preserving data analysis. Our first result on the other hand gives unconditional lower bounds on any differentially private algorithm that admits a (potentially non-privacy-preserving) implementation using only statistical queries. Not only our algorithms, but also most known private algorithms can be implemented using only statistical queries, and hence are constrained by these lower bounds. Our result therefore isolates the complexity of agnostic learning in the SQ-model as a new barrier in the design of differentially private algorithms.
Recently, Roth and Roughgarden @cite_14 and Hardt and Rothblum @cite_6 gave interactive private query release mechanisms that allow a data analyst to ask a large number of questions while expending the privacy budget only slowly. Their privacy analyses depend on the fact that only a small fraction of the queries asked necessitate updating the internal state of the algorithm. However, to answer large classes of queries, these algorithms need to make a large number of statistical queries to the database, even though only a small number of those queries result in update steps! Intuitively, our characterization of the query complexity of the release problem in the SQ model is based on two observations: first, that these interactive mechanisms could be implemented using only a small number of statistical queries if the data analyst were able to ask only those queries that result in update steps, and second, that finding the queries that induce large update steps is exactly the problem of agnostic learning.
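The following is a deliberately simplified, non-private skeleton (our own illustration, not the actual mechanism of @cite_14 or @cite_6 ) of such a multiplicative-weights style update loop. It is only meant to show why queries that are already well approximated by the current synthetic distribution cost essentially nothing, while the rare queries that deviate a lot trigger an update, so that locating them is in essence an agnostic-learning task.

import math

def mw_release(universe, true_answer, queries, threshold=0.05, eta=0.5):
    """universe: list of items; true_answer(q): exact answer on the real data (a noisy
    statistical query in the private version); each query q maps an item to 0 or 1."""
    weights = {x: 1.0 for x in universe}  # uniform synthetic distribution to start with
    answers, updates = [], 0
    for q in queries:
        total = sum(weights.values())
        synthetic = sum(w for x, w in weights.items() if q(x)) / total
        real = true_answer(q)
        if abs(real - synthetic) <= threshold:
            answers.append(synthetic)  # no update step needed for this query
            continue
        updates += 1  # a "hard" query: re-weight the synthetic distribution
        sign = 1.0 if real > synthetic else -1.0
        for x in weights:
            weights[x] *= math.exp(eta * sign * q(x))
        answers.append(real)
    return answers, updates

In the actual private mechanisms it is the number of update steps, not the total number of queries asked, that drives the privacy cost, which is exactly the observation the paragraph above exploits.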
{ "cite_N": [ "@cite_14", "@cite_6" ], "mid": [ "2951691640", "1985310469" ], "abstract": [ "We define a new interactive differentially private mechanism -- the median mechanism -- for answering arbitrary predicate queries that arrive online. Relative to fixed accuracy and privacy constraints, this mechanism can answer exponentially more queries than the previously best known interactive privacy mechanism (the Laplace mechanism, which independently perturbs each query result). Our guarantee is almost the best possible, even for non-interactive privacy mechanisms. Conceptually, the median mechanism is the first privacy mechanism capable of identifying and exploiting correlations among queries in an interactive setting. We also give an efficient implementation of the median mechanism, with running time polynomial in the number of queries, the database size, and the domain size. This efficient implementation guarantees privacy for all input databases, and accurate query results for almost all input databases. The dependence of the privacy on the number of queries in this mechanism improves over that of the best previously known efficient mechanism by a super-polynomial factor, even in the non-interactive setting.", "We consider statistical data analysis in the interactive setting. In this setting a trusted curator maintains a database of sensitive information about individual participants, and releases privacy-preserving answers to queries as they arrive. Our primary contribution is a new differentially private multiplicative weights mechanism for answering a large number of interactive counting (or linear) queries that arrive online and may be adaptively chosen. This is the first mechanism with worst-case accuracy guarantees that can answer large numbers of interactive queries and is efficient (in terms of the runtime's dependence on the data universe size). The error is asymptotically in its dependence on the number of participants, and depends only logarithmically on the number of queries being answered. The running time is nearly linear in the size of the data universe. As a further contribution, when we relax the utility requirement and require accuracy only for databases drawn from a rich class of databases, we obtain exponential improvements in running time. Even in this relaxed setting we continue to guarantee privacy for any input database. Only the utility requirement is relaxed. Specifically, we show that when the input database is drawn from a smooth distribution — a distribution that does not place too much weight on any single data item — accuracy remains as above, and the running time becomes poly-logarithmic in the data universe size. The main technical contributions are the application of multiplicative weights techniques to the differential privacy setting, a new privacy analysis for the interactive setting, and a technique for reducing data dimensionality for databases drawn from smooth distributions." ] }
1011.1783
1568267803
This paper presents the current state of an ongoing research project to improve the performance of the OCaml byte-code interpreter using Just-In-Time native code generation. Our JIT engine OCamlJIT2 currently runs on x86-64 processors, mimicking precisely the behavior of the OCaml virtual machine. Its design and implementation are described, and performance measures are given.
Just-In-Time compilation is an active research field @cite_27 @cite_28 . A lot of work has been done on efficient dynamic compilation for the Java Virtual Machine (JVM) and the Common Language Runtime (CLR), which is part of the .NET framework. JIT compilation of Java or .NET byte-code is usually driven by first interpreting the byte-code, collecting profiling information, and then selecting the appropriate methods to optimize by inspecting the collected profiling data @cite_15 @cite_23 .
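As a schematic illustration of this counter-driven strategy (this is not OCamlJIT2's design, and all names below are placeholders), a virtual machine can keep interpreting a method while bumping an invocation counter and switch to generated code once the counter crosses a hotness threshold:

HOT_THRESHOLD = 1000

class Method:
    def __init__(self, name, bytecode):
        self.name, self.bytecode = name, bytecode
        self.calls, self.native = 0, None

def interpret(method, args):
    """Stand-in for the byte-code interpreter (slow path)."""
    return sum(args)  # pretend the method just sums its arguments

def compile_native(method):
    """Stand-in for the JIT back end: returns a 'native' entry point for the method."""
    return lambda args: sum(args)

def call(method, args):
    method.calls += 1
    if method.native is None and method.calls >= HOT_THRESHOLD:
        method.native = compile_native(method)  # promote the hot method exactly once
    if method.native is not None:
        return method.native(args)  # fast path from now on
    return interpret(method, args)  # cold methods stay interpreted

if __name__ == "__main__":
    m = Method("sum3", bytecode=b"")
    for _ in range(1500):
        call(m, (1, 2, 3))
    print(m.calls, "calls;", "compiled" if m.native else "interpreted only")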
{ "cite_N": [ "@cite_28", "@cite_27", "@cite_23", "@cite_15" ], "mid": [ "", "2143588523", "2083400053", "2135740686" ], "abstract": [ "", "Virtual machines face significant performance challenges beyond those confronted by traditional static optimizers. First, portable program representations and dynamic language features, such as dynamic class loading, force the deferral of most optimizations until runtime, inducing runtime optimization overhead. Second, modular program representations preclude many forms of whole-program interprocedural optimization. Third, virtual machines incur additional costs for runtime services such as security guarantees and automatic memory management. To address these challenges, vendors have invested considerable resources into adaptive optimization systems in production virtual machines. Today, mainstream virtual machine implementations include substantial infrastructure for online monitoring and profiling, runtime compilation, and feedback-directed optimization. As a result, adaptive optimization has begun to mature as a widespread production-level technology. This paper surveys the evolution and current state of adaptive optimization technology in virtual machines.", "The paper describes the design and implementation of an adaptive recompilation framework for Rotor, a shared source implementation of the common language infrastructure (CLI) that can increase program performance through intelligent recompilation decisions and optimisations based on the program's past behaviour. Our extensions to Rotor include a low overhead run-time-stack based sampling profiler that identifies program hotspots. At the first level of optimisation, the compiler uses a fast yet effective linear scan algorithm for register allocation. Hot methods can be instrumented to collect basic-block, edge and call-graph profile information. Profile-guided optimisations driven by online profile information are used to further optimise heavily executed methods at the second level of recompilation. An evaluation of the framework using a set of test programs shows that performance can improve by a maximum of 42.3 and by 9 on average. Our results also show that the overheads of collecting accurate profile information through instrumentation to an extent outweigh the benefits of profile-guided optimisations in our implementation, suggesting the need for implementing techniques that can reduce such overheads. A flexible and extensible framework design implies that additional profiling and optimisation techniques can easily be incorporated to further improve performance.", "While the concept of online profile directed dynamic optimizations using hardware performance monitoring unit (PMU) data is not new, it has seen fairly limited or no use in commercial JVMs. The main reason behind this fact is the set of significant challenges involved in (1) obtaining low overhead and usable profiling support from the underlying platform (2) the complexity of filtering, interpreting and using precise PMU events online in a JVM environment (3) demonstrating the total runtime benefit of PMU data based optimizations above and beyond regular online profile based optimizations. In this paper we address all three challenges by presenting a practical framework for PMU data collection and use within a high performance product JVM on a highly scalable server platform. 
Our experiments with JavaTM workloads using the SunTM HotspotTM JDK 1.6 JVM on the Intel ® Itanium ® platform indicate that the hardware data collection overhead (less than 0.5 ) is not as significant as the challenge of extracting the precise information for optimization purposes. We demonstrate the feasibility of mapping the instruction IP address based hardware event information to the runtime components as well as the JIT server compiler internal data structures for use in optimizations within a dynamic environment. We also evaluated the additional performance potential of optimizations such as object co-location during garbage collection and global instruction scheduling in the JIT compiler with the use of PMU generated load latency information. Experimental results show performance improvements of up to 14 with an average of 2.2 across select Java server benchmarks such as SPECjbb2005[16], SPECjvm2008[17] and Dacapo[18]. These benefits were observed over and above those provided by profile guided server JVM optimizations in the absence of hardware PMU data." ] }
1011.1783
1568267803
This paper presents the current state of an ongoing research project to improve the performance of the OCaml byte-code interpreter using Just-In-Time native code generation. Our JIT engine OCamlJIT2 currently runs on x86-64 processors, mimicking precisely the behavior of the OCaml virtual machine. Its design and implementation are described, and performance measures are given.
More recent examples include the Lua programming language @cite_22 @cite_17 , a powerful, fast, lightweight, embeddable scripting language, accompanied by LuaJIT @cite_10 , which combines a high-performance interpreter with a state-of-the-art Just-In-Time compiler.
{ "cite_N": [ "@cite_10", "@cite_22", "@cite_17" ], "mid": [ "", "2091731831", "1981404401" ], "abstract": [ "", "We report on the birth and evolution of Lua and discuss how it moved from a simple configuration language to a versatile, widely used language that supports extensible semantics, anonymous functions, full lexical scoping, proper tail calls, and coroutines.", "This paper describes Lua, a language for extending applications. Lua combines procedural features with powerful data description facilities, by using a simple, yet powerful, mechanism of tables . This mechanism implements the concepts of records, arrays and recursive data types (pointers), and adds some object-oriented facilities, such as methods with dynamic dispatching. Lua presents a mechanism of fallbacks that allows programmers to extend the semantics of the language in some unconventional ways. As a noteworthy example, fallbacks allow the user to add different kinds of inheritance to the language. Currently, Lua is being extensively used in production for several tasks, including user configuration, general-purpose data-entry, description of user interfaces, storage of structured graphical metafiles, and generic attribute configuration for finite element meshes." ] }
1011.0313
2950748160
It is well-known that the spacetime diagrams of some cellular automata have a fractal structure: for instance Pascal's triangle modulo 2 generates a Sierpinski triangle. Explaining the fractal structure of the spacetime diagrams of cellular automata is a much explored topic, but virtually all of the results revolve around a special class of automata, whose typical features include irreversibility, an alphabet with a ring structure, a global evolution that is a ring homomorphism, and a property known as (weakly) p-Fermat. The class of automata that we study in this article has none of these properties. Their cell structure is weaker, as it does not come with a multiplication, and they are far from being p-Fermat, even weakly. However, they do produce fractal spacetime diagrams, and we explain why and how.
[ @cite_7 ] Macfarlane uses Willson's approach and generalizes parts of it to some examples of matrix-valued CAs, including @math . However, the transition matrix is obtained heuristically --- "by scrutiny of figure 9" --- from the spacetime diagram, instead of being algorithmically derived from the transition rule (as in the present work). The conclusion (section 6) suggests that the analysis of @math is easily generalizable to matrices of various sizes over various rings, so in a sense the present article is but an elaboration of the concluding remark of @cite_7 , although we have to say we do not find this generalization to be that obvious.
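For readers who want to reproduce the kind of spacetime diagram under discussion, here is the standard rule-90 / Pascal's-triangle-mod-2 toy example (deliberately not the RCA1-3 automata of @cite_7 ); matrix-valued CAs are iterated in exactly the same way, with the scalar cell alphabet replaced by small matrices over a ring.

def rule90_diagram(width=63, steps=32):
    """Spacetime diagram of rule 90 (each cell is the XOR of its two neighbours)."""
    row = [0] * width
    row[width // 2] = 1  # single seed cell
    diagram = [row]
    for _ in range(steps - 1):
        row = [row[i - 1] ^ row[(i + 1) % width] for i in range(width)]
        diagram.append(row)
    return diagram

if __name__ == "__main__":
    for row in rule90_diagram():
        print("".join("#" if cell else "." for cell in row))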
{ "cite_N": [ "@cite_7" ], "mid": [ "2029310694" ], "abstract": [ "Linear or one-dimensional reversible second-order cellular automata, exemplified by three cases named as RCA1–3, are introduced. Displays of their evolution in discrete time steps, , from their simplest initial states and on the basis of updating rules in modulo 2 arithmetic, are presented. In these, shaded and unshaded squares denote cells whose cell variables are equal to one and zero respectively. This paper is devoted to finding general formulas for, and explicit numerical evaluations of, the weights N(t) of the states or configurations of RCA1–3, i.e. the total number of shaded cells in tth line of their displays. This is achieved by means of the replacement of RCA1–3 by the equivalent linear first-order matrix automata MCA1–3, for which the cell variables are matrices, instead of just numbers () as for RCA1–3. MCA1–3 are tractable because it has been possible to generalize to them the heavy duty methods already well-developed for ordinary first-order cellular automata like those of Wolfram's Rules 90 and 150. While the automata MCA1–3 are thought to be of genuine interest in their own right, with untapped further mathematical potential, their treatment has been applied here to expediting derivation of a large body of general and explicit results for N(t) for RCA1–3. Amongst explicit results obtained are formulas also for each of RCA1–3 for the total weight of the configurations of the first times, ." ] }
1010.5128
2949136294
Low-power and Lossy Networks (LLNs), like wireless networks based upon the IEEE 802.15.4 standard, have strong energy constraints, and are moreover subject to frequent transmission errors, not only due to congestion but also to collisions and to radio channel conditions. This paper introduces an analytical model to compute the total energy consumption in an LLN due to the TCP protocol. The model allows us to highlight some tradeoffs as regards the choice of the TCP maximum segment size, of the Forward Error Correction (FEC) redundancy ratio, and of the number of link-layer retransmissions, in order to minimize the total energy consumption.
@cite_0 and @cite_18 present an analytical study of a TCP optimization problem in a hybrid wired/wireless network where the last hop is a wireless link. The authors of both papers define a utility function which is the ratio of the throughput to the cost of a TCP connection. Our work completes these studies by introducing a multi-hop model for computing the energy consumption. Note that in this paper we are not interested in TCP throughput, because data rates are a secondary concern in LLNs; instead, we focus mainly on the energy costs of a TCP connection over multiple lossy links.
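To see why a multi-hop energy model exposes tradeoffs that a single last-hop model cannot, here is a toy back-of-the-envelope calculation (our own, not the model of this paper or of @cite_0 / @cite_18 ): the expected transmission energy per successfully delivered frame over several lossy hops, as a function of the per-hop loss probability and the link-layer retransmission limit.

def per_hop_delivery_prob(p_loss: float, retries: int) -> float:
    """Probability that a frame crosses one hop within `retries` transmission attempts."""
    return 1.0 - p_loss ** retries

def expected_tx_per_hop(p_loss: float, retries: int) -> float:
    """Expected number of transmission attempts spent on one hop (successful or not)."""
    return sum(p_loss ** (k - 1) for k in range(1, retries + 1))

def energy_per_delivered_frame(p_loss: float, hops: int, retries: int, e_tx: float = 1.0) -> float:
    q = per_hop_delivery_prob(p_loss, retries)
    tx = expected_tx_per_hop(p_loss, retries)
    spent = sum(q ** i for i in range(hops)) * tx * e_tx  # hop i only sees frames that survived i-1 hops
    return spent / (q ** hops)  # charge all spent energy to the frames that actually arrive

if __name__ == "__main__":
    for retries in (1, 2, 3, 5, 8):
        cost = energy_per_delivered_frame(p_loss=0.2, hops=4, retries=retries)
        print(f"retry limit {retries}: {cost:.2f} transmissions per delivered frame")

Adding a term for FEC redundancy, which lowers p_loss at the price of inflating every transmission, turns this into the kind of segment-size, redundancy and retransmission tradeoff that the paper's model addresses.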
{ "cite_N": [ "@cite_0", "@cite_18" ], "mid": [ "1518923351", "2167583087" ], "abstract": [ "TCP performance degrades when end-to-end connections extend over wireless connections — links which are characterized by high bit error rate and intermittent connectivity. Such link characteristics can significantly degrade TCP performance as the TCP sender assumes wireless losses to be congestion losses resulting in unnecessary congestion control actions. Link errors can be reduced by increasing transmission power, code redundancy (FEC) or number of retransmissions (ARQ). But increasing power costs resources, increasing code redundancy reduces available channel bandwidth and increasing persistency increases end-to-end delay. The paper proposes a TCP optimization through proper tuning of power management, FEC and ARQ in wireless environments (WLAN and WWAN). In particular, we conduct analytical and numerical analysis taking into account the three aforementioned factors, and evaluate TCP (and “wireless-aware” TCP) performance under different settings. Our results show that increasing power, redundancy and or retransmission levels always improves TCP performance by reducing link-layer losses. However, such improvements are often associated with cost and arbitrary improvement cannot be realized without paying a lot in return. It is therefore important to consider some kind of net utility function that should be optimized, thus maximizing throughput at the least possible cost.", "It is well known that TCP has performance problems when wireless links are involved in the end-to-end connection. This is due to the high bit error rate characterizing wireless links. Appropriate power management and error correction can improve the link reliability observed by TCP and increase the throughput performance accordingly. In the literature, the effects of transmission power and error correction capability on TCP performance have been considered separately, so far. We study the tradeoff between power management and error correction. To this end, an analytical framework to maximize a user satisfaction function, defined as the ratio between the TCP throughput and a cost function, is introduced. The proposed analytical framework does not depend on the specific wireless system and does not rely on any TCP throughput approximation formula. The benefits of joint power management and error control are demonstrated in several relevant case studies." ] }
1010.5644
2140766031
Multiple-input double-output (MIDO) codes are important in the near-future wireless communications, where the portable end-user device is physically small and will typically contain at most two receive antennas. Especially tempting is the 4×2 channel due to its immediate applicability in the digital video broadcasting (DVB). Such channels optimally employ rate-two space-time (ST) codes consisting of (4×4) matrices. Unfortunately, such codes are in general very complex to decode, hence setting forth a call for constructions with reduced complexity. Recently, some reduced complexity constructions have been proposed, but they have mainly been based on different ad hoc methods and have resulted in isolated examples rather than in a more general class of codes. In this paper, it will be shown that a family of division algebra based MIDO codes will always result in at least 37.5% worst-case complexity reduction, while maintaining full diversity and, for the first time, the nonvanishing determinant (NVD) property. The reduction follows from the fact that, similarly to the Alamouti code, the codes will be subsets of matrix rings of the Hamiltonian quaternions, hence allowing simplified decoding. At the moment, such reductions are among the best known for rate-two MIDO codes [5], [6]. Several explicit constructions are presented and shown to have excellent performance through computer simulations.
The first reduced ML-complexity @math construction was given in @cite_16 , combining two copies of a quasi-orthogonal code @cite_24 . This resulted in a MIDO code that does have lower decoding complexity, but unfortunately does not have full rank. Nevertheless, good performance is still achieved at low-to-moderate SNRs, with four fewer real dimensions in the sphere decoder.
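The decoding simplification that such block structures buy is easiest to see in the classic 2x1 Alamouti code, sketched below as a small self-contained example (the textbook case, not the 4x2 constructions discussed here): after linear combining at the receiver, the ML search decouples into independent per-symbol decisions instead of a joint search over all symbol pairs.

import cmath, random

QPSK = [cmath.exp(1j * cmath.pi * (0.25 + 0.5 * k)) for k in range(4)]

def alamouti_decode(r1, r2, h1, h2):
    """Orthogonal combining followed by independent per-symbol slicing."""
    g = abs(h1) ** 2 + abs(h2) ** 2
    z1 = (h1.conjugate() * r1 + h2 * r2.conjugate()) / g  # estimate of s1
    z2 = (h2.conjugate() * r1 - h1 * r2.conjugate()) / g  # estimate of s2
    slicer = lambda z: min(QPSK, key=lambda s: abs(z - s))
    return slicer(z1), slicer(z2)

if __name__ == "__main__":
    random.seed(2)
    s1, s2 = random.choice(QPSK), random.choice(QPSK)
    h1 = complex(random.gauss(0, 1), random.gauss(0, 1))
    h2 = complex(random.gauss(0, 1), random.gauss(0, 1))
    r1 = h1 * s1 + h2 * s2  # time slot 1 (noise omitted for clarity)
    r2 = -h1 * s2.conjugate() + h2 * s1.conjugate()  # time slot 2
    print("decoded correctly:", alamouti_decode(r1, r2, h1, h2) == (s1, s2))

A joint ML search over a pair of QPSK symbols would examine 16 hypotheses; here each symbol is sliced independently, and retaining as much of this effect as possible in larger codes is precisely the aim of the constructions above.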
{ "cite_N": [ "@cite_24", "@cite_16" ], "mid": [ "2110659753", "2264873466" ], "abstract": [ "We introduce space-time block coding, a new paradigm for communication over Rayleigh fading channels using multiple transmit antennas. Data is encoded using a space-time block code and the encoded data is split into n streams which are simultaneously transmitted using n transmit antennas. The received signal at each receive antenna is a linear superposition of the n transmitted signals perturbed by noise. Maximum-likelihood decoding is achieved in a simple way through decoupling of the signals transmitted from different antennas rather than joint detection. This uses the orthogonal structure of the space-time block code and gives a maximum-likelihood decoding algorithm which is based only on linear processing at the receiver. Space-time block codes are designed to achieve the maximum diversity order for a given number of transmit and receive antennas subject to the constraint of having a simple decoding algorithm. The classical mathematical framework of orthogonal designs is applied to construct space-time block codes. It is shown that space-time block codes constructed in this way only exist for few sporadic values of n. Subsequently, a generalization of orthogonal designs is shown to provide space-time block codes for both real and complex constellations for any number of transmit antennas. These codes achieve the maximum possible transmission rate for any number of transmit antennas using any arbitrary real constellation such as PAM. For an arbitrary complex constellation such as PSK and QAM, space-time block codes are designed that achieve 1 2 of the maximum possible transmission rate for any number of transmit antennas. For the specific cases of two, three, and four transmit antennas, space-time block codes are designed that achieve, respectively, all, 3 4, and 3 4 of maximum possible transmission rate using arbitrary complex constellations. The best tradeoff between the decoding delay and the number of transmit antennas is also computed and it is shown that many of the codes presented here are optimal in this sense as well.", "We focus on full-rate, fast-decodable space-time block codes (STBCs) for 2 times 2 and 4times2 multiple-input multiple-output (MIMO) transmission. We first derive conditions and design criteria for reduced-complexity maximum-likelihood (ML) decodable 2times2 STBCs, and we apply them to two families of codes that were recently discovered. Next, we derive a novel reduced-complexity 4times2 STBC, and show that it outperforms all previously known codes with certain constellations." ] }
1010.5644
2140766031
Multiple-input double-output (MIDO) codes are important in the near-future wireless communications, where the portable end-user device is physically small and will typically contain at most two receive antennas. Especially tempting is the 4×2 channel due to its immediate applicability in the digital video broadcasting (DVB). Such channels optimally employ rate-two space-time (ST) codes consisting of (4×4) matrices. Unfortunately, such codes are in general very complex to decode, hence setting forth a call for constructions with reduced complexity. Recently, some reduced complexity constructions have been proposed, but they have mainly been based on different ad hoc methods and have resulted in isolated examples rather than in a more general class of codes. In this paper, it will be shown that a family of division algebra based MIDO codes will always result in at least 37.5% worst-case complexity reduction, while maintaining full diversity and, for the first time, the nonvanishing determinant (NVD) property. The reduction follows from the fact that, similarly to the Alamouti code, the codes will be subsets of matrix rings of the Hamiltonian quaternions, hence allowing simplified decoding. At the moment, such reductions are among the best known for rate-two MIDO codes [5], [6]. Several explicit constructions are presented and shown to have excellent performance through computer simulations.
The most recent results on fast-decodable codes have appeared in @cite_6 , where new constructions with optimized performance have been presented, and in @cite_17 @cite_30 @cite_4 , where fast-decodable codes with the NVD property have been built from crossed product and cyclic presentations of division algebras. In the preprint @cite_28 the authors consider quadratic forms as a tool for characterizing the decoding complexity, and in the preprint @cite_13 multi-group ML-decodable collocated and distributed space-time codes are proposed.
{ "cite_N": [ "@cite_30", "@cite_4", "@cite_28", "@cite_6", "@cite_13", "@cite_17" ], "mid": [ "2078752703", "2046822898", "51093891", "2051483799", "2949343062", "2137365644" ], "abstract": [ "Multiple-input double-output (MIDO) codes are important in future wireless communications, where the portable end-user device is physically small and uses only two receive antennas. In this paper, we address the design of 4×2 MIDO codes. Starting from a 4×4 space-time block code matrix built from a cyclic division algebra, two ways of puncturing the code are presented, resulting in either a well-shaped MIDO code, or a MIDO code with some orthonormal columns, yielding fast maximum-likelihood (ML) decodability. The well-shaped MIDO code outperforms the fast decodable one through simulations, an indication that the shaping property stays an important code design criterion. We then provide a slightly modified version of the well-shaped MIDO code which both preserves the shaping and increases the orthogonality of its columns in an attempt to speed up the decoding of the code. Finally, we show that a multiple-input single output (MISO) code is actually embedded in the MIDO code, allowing the transmitter to choose between sending a MIDO or MISO code, without having to change the encoder. All the proposed codes have the non-vanishing determinant (NVD) property.", "Multiple-input double-output (MIDO) codes are important in future wireless communications, where the portable end-user device is physically small and will typically contain maximum two receive antennas. Especially tempting is the 4×2 channel, where the four transmitters can either be all at one station, or separated between two different stations. Such channels optimally employ rate-two space-time (ST) codes consisting of 4×4 matrices. Unfortunately, such codes are in general very complex to decode, the worst-case complexity being as high as N8, where N is the size of the complex signaling alphabet. Hence, constructions with reduced complexity are called for. One option, of course, is to use the rate-one codes such as the quasi-orthogonal codes. However, if full multiplexing, i.e., transmission of two symbols per channel use is to be maintained, this option has to be put aside. Recently, some reduced complexity constructions have been proposed, but they have mainly been based on ad hoc methods and have resulted in a specific code instead of a more general class of codes. In this paper, it will be shown that cyclic division algebra (CDA) based codes satisfying certain criteria will always result in at least 25 worst-case complexity reduction, while maintaining full diversity and even the non-vanishing determinant (NVD) property. The reduction follows from the fact that the codes will consist of four Alamouti blocks allowing simplified decoding. At the moment, such reduction is the best known for rate-two MIDO codes [3], [15]. The code proposed in [10] was the first one to provably fulfill the related algebraic properties, and shall be repeated here as an example. Further, a new low-complexity design resulting from the proposed criteria is presented, and shown to have excellent performance through simulations.", "Abstract— A linear space-time block code (STBC) is a vector space spanned by its defining weight matrices over the real number field. We define a Quadratic Form (QF), called the Hurwitz-Radon QF (HRQF), on this vector space and give a QF interpretation of the ML decoding complexity of a STBC. 
It is shown that the ML decoding complexity is only a function of the weight matrices defining the code and their ordering, and not of the channel realization (even though the equivalent channel when sphere decoding is used depends on the channel realization) or the number of receive antennas. It is shown that the ML decoding complexity is completely captured into a single matrix obtained from the HRQF. Also, given a set of weight matrices, an algorithm to obtain the best ordering of them leading to the least ML decoding complexity is presented. The well known classes of low ML decoding complexity codes (multi-group decodable codes, fast decodable codes and fast group decodable codes) are presented in the framework of HRQF.", "This paper deals with low maximum-likelihood (ML)-decoding complexity, full-rate and full-diversity space-time block codes (STBCs), which also offer large coding gain, for the 2 transmit antenna, 2 receive antenna (2 × 2) and the 4 transmit antenna, 2 receive antenna (4 × 2) MIMO systems. Presently, the best known STBC for the 2 × 2 system is the Golden code and that for the 4 × 2 system is the DjABBA code. Following the approach by Biglieri, Hong, and Viterbo, a new STBC is presented in this paper for the 2 × 2 system. This code matches the Golden code in performance and ML-decoding complexity for square QAM constellations while it has lower ML-decoding complexity with the same performance for non-rectangular QAM constellations. This code is also shown to be information-lossless and diversity-multiplexing gain (DMG) tradeoff optimal. This design procedure is then extended to the 4 × 2 system and a code, which outperforms the DjABBA code for QAM constellations with lower ML-decoding complexity, is presented. So far, the Golden code has been reported to have an ML-decoding complexity of the order of M 4 for square QAM of size M. In this paper, a scheme that reduces its ML-decoding complexity to M 2?(M) is presented.", "In this paper, collocated and distributed space-time block codes (DSTBCs) which admit multi-group maximum likelihood (ML) decoding are studied. First the collocated case is considered and the problem of constructing space-time block codes (STBCs) which optimally tradeoff rate and ML decoding complexity is posed. Recently, sufficient conditions for multi-group ML decodability have been provided in the literature and codes meeting these sufficient conditions were called Clifford Unitary Weight (CUW) STBCs. An algebraic framework based on extended Clifford algebras is proposed to study CUW STBCs and using this framework, the optimal tradeoff between rate and ML decoding complexity of CUW STBCs is obtained for few specific cases. Code constructions meeting this tradeoff optimally are also provided. The paper then focuses on multi-group ML decodable DSTBCs for application in synchronous wireless relay networks and three constructions of four-group ML decodable DSTBCs are provided. Finally, the OFDM based Alamouti space-time coded scheme proposed by Li-Xia for a 2 relay asynchronous relay network is extended to a more general transmission scheme that can achieve full asynchronous cooperative diversity for arbitrary number of relays. It is then shown how differential encoding at the source can be combined with the proposed transmission scheme to arrive at a new transmission scheme that can achieve full cooperative diversity in asynchronous wireless relay networks with no channel information and also no timing error knowledge at the destination node. 
Four-group decodable DSTBCs applicable in the proposed OFDM based transmission scheme are also given.", "The goal of this paper is to design fast-decodable space-time codes for four transmit and two receive antennas. The previous attempts to build such codes have resulted in codes that are not full rank and hence cannot provide full diversity or high coding gains. Extensive work carried out on division algebras indicates that in order to get, not only non-zero but perhaps even non-vanishing determinants (NVD) one should look at division algebras and their orders. To further aid the decoding, we will build our codes so that they consist of four generalized Alamouti blocks which allows decoding with reduced complexity. As far as we know, the resulting codes are the first having both reduced decoding complexity, and at the same time allowing one to give a proof of the NVD property." ] }
1010.5938
2951470174
Takens' Embedding Theorem remarkably established that concatenating M previous outputs of a dynamical system into a vector (called a delay coordinate map) can be a one-to-one mapping of a low-dimensional attractor from the system state space. However, Takens' theorem is fragile in the sense that even small imperfections can induce arbitrarily large errors in this attractor representation. We extend Takens' result to establish deterministic, explicit and non-asymptotic sufficient conditions for a delay coordinate map to form a stable embedding in the restricted case of linear dynamical systems and observation functions. Our work is inspired by the field of Compressive Sensing (CS), where results guarantee that low-dimensional signal families can be robustly reconstructed if they are stably embedded by a measurement operator. However, in contrast to typical CS results, i) our sufficient conditions are independent of the size of the ambient state space, and ii) some system and measurement pairs have fundamental limits on the conditioning of the embedding (i.e., how close it is to an isometry), meaning that further measurements beyond some point add no further significant value. We use several simple simulations to explore the conditions of the main results, including the tightness of the bounds and the convergence speed of the stable embedding. We also present an example task of estimating the attractor dimension from time-series data to highlight the value of stable embeddings over traditional Takens' embeddings.
Given the system matrix @math and the definition of a dynamical system , knowing the state at some fixed time @math is equivalent to knowing the path that the system takes to and from that state (called the ). Classic results in linear systems theory @cite_19 show that the explicit solution for this path is given by a matrix multiplication: @math , where @math is the . Note that this solution is valid for positive or negative values of @math , describing the flow both forward and backward from time @math .
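As a small concrete check of this statement in the discrete-time case (our own illustration; the notation is not the paper's), the whole path of a linear system x_{k+1} = A x_k is pinned down by the state at a single reference time, with negative powers of an invertible A giving the backward flow.

import numpy as np

A = np.array([[0.9, 0.2],
              [-0.3, 0.8]])  # an invertible system matrix
x0 = np.array([1.0, -1.0])   # state at the reference time

def state_at(m: int) -> np.ndarray:
    """State m steps away from the reference time (m may be negative)."""
    return np.linalg.matrix_power(A, m) @ x0

forward = state_at(5)    # flow 5 steps forward
backward = state_at(-5)  # flow 5 steps backward
assert np.allclose(np.linalg.matrix_power(A, 5) @ backward, x0)  # backward then forward returns x0
print(forward, backward)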
{ "cite_N": [ "@cite_19" ], "mid": [ "1973942504" ], "abstract": [ "1. Background and Preview. 2. Highlights of Classical Control Theory. 3. State Variables and the State Space Description of Dynamic Systems. 4. Fundamentals of Matrix Algebra. 5. Vectors and Linear Vector Spaces. 6. Simultaneous Linear Equations. 7. Eigenvalues and Eigenvectors. 8. Functions of Square Matrices and the Cayley-Hamilton Theorem. 9. Analysis of Continuous and Discrete Time State Equations. 10. Stability. 11. Controllability and Observability for Linear Systems. 12. The Relationship between State Variable and Transfer Function Descriptions of Systems. 13. Design of Linear Feedback Control Systems. 14. An Introduction to Optimal Control Theory. 15. An Introduction to Nonlinear Control Systems." ] }
1010.5938
2951470174
Takens' Embedding Theorem remarkably established that concatenating M previous outputs of a dynamical system into a vector (called a delay coordinate map) can be a one-to-one mapping of a low-dimensional attractor from the system state space. However, Takens' theorem is fragile in the sense that even small imperfections can induce arbitrarily large errors in this attractor representation. We extend Takens' result to establish deterministic, explicit and non-asymptotic sufficient conditions for a delay coordinate map to form a stable embedding in the restricted case of linear dynamical systems and observation functions. Our work is inspired by the field of Compressive Sensing (CS), where results guarantee that low-dimensional signal families can be robustly reconstructed if they are stably embedded by a measurement operator. However, in contrast to typical CS results, i) our sufficient conditions are independent of the size of the ambient state space, and ii) some system and measurement pairs have fundamental limits on the conditioning of the embedding (i.e., how close it is to an isometry), meaning that further measurements beyond some point add no further significant value. We use several simple simulations to explore the conditions of the main results, including the tightness of the bounds and the convergence speed of the stable embedding. We also present an example task of estimating the attractor dimension from time-series data to highlight the value of stable embeddings over traditional Takens' embeddings.
One of the principal benefits of a stable delay coordinate map would be resilience to noise and other imperfections. The effect of noise on the reconstruction of state space attractors has also been considered by several researchers, independently of the notion of a stable embedding. In @cite_21 the authors looked at a modified embedding theorem for systems corrupted by dynamical noise, considering specifically embeddings built from multivariate time series of system outputs and taking more measurements than are typically required for a delay-coordinate map. In @cite_13 , the authors study the effects of observational noise via statistical methods, showing how the choice of delay coordinates (i.e., the choice of observation function @math and sampling time @math with respect to the system dynamics) affects the ability to make predictions. In particular, they showed that a poor reconstruction amplifies noise and increases estimation error.
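For concreteness, the generic construction under discussion looks as follows: a plain delay-coordinate map built from a noisy scalar time series. The window length M and lag tau below are arbitrary illustrative choices, not values recommended by @cite_21 or @cite_13 .

import numpy as np

def delay_coordinates(series: np.ndarray, M: int, tau: int = 1) -> np.ndarray:
    """Row k is the delay vector [y_k, y_{k+tau}, ..., y_{k+(M-1)tau}]."""
    n = len(series) - (M - 1) * tau
    return np.stack([series[k : k + (M - 1) * tau + 1 : tau] for k in range(n)])

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    t = np.arange(2000) * 0.05
    clean = np.sin(t) + 0.5 * np.sin(2.3 * t)  # toy scalar observation of a flow
    noisy = clean + 0.05 * rng.standard_normal(t.shape)
    vectors = delay_coordinates(noisy, M=7, tau=3)
    print(vectors.shape)  # (number of delay vectors, M)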
{ "cite_N": [ "@cite_21", "@cite_13" ], "mid": [ "2022980104", "2158038039" ], "abstract": [ "We present a new embedding theorem for time series, in the spirit of Takens's theorem, but requiring multivariate signals. Our result is part of a growing body of work that extends the domain of geometric time series analysis to some genuinely stochastic systems-including such natural examples where φ is some fixed map and the ηi are i.i.d. random displacements", "Takens' theorem demonstrates that in the absence of noise a multidimensional state space can be reconstructed from a scalar time series. This theorem gives little guidance, however, about practical considerations for reconstructing a good state space. We extend Takens' treatment, applying statistical methods to incorporate the effects of observational noise and estimation error. We define the distortion matrix, which is proportional to the conditional covariance of a state, given a series of noisy measurements, and the noise amplification, which is proportional to root-mean-square time series prediction errors with an ideal model. We derive explicit formulae for these quantities, and we prove that in the low noise limit minimizing the distortion is equivalent to minimizing the noise amplification. We identify several different scaling regimes for distortion and noise amplification, and derive asymptotic scaling laws. When the dimension and Lyapunov exponents are sufficiently large these scaling laws show that, no matter how the state space is reconstructed, there is an explosion in the noise amplification- from a practical point of view determinism is lost, and the time series is effectively a random process. In the low noise, large data limit we show that the technique of local singular value decomposition is an optimal coordinate transformation, in the sense that it achieves the minimum distortion in a state space of the lowest possible dimension. However, in numerical experiments we find that estimation error complicates this issue. For local approximation methods, we analyze the effect of reconstruction on estimation error, derive a scaling law, and suggest an algorithm for reducing estimation errors." ] }
1010.4502
2951728753
We analyze the problem of packing squares in an online fashion: Given a semi-infinite strip of width 1 and an unknown sequence of squares of side length in [0,1] that arrive from above, one at a time. The objective is to pack these items as they arrive, minimizing the resulting height. Just like in the classical game of Tetris, each square must be moved along a collision-free path to its final destination. In addition, we account for gravity in both motion (squares must never move up) and position (any final destination must be supported from below). A similar problem has been considered before; the best previous result is by Azar and Epstein, who gave a 4-competitive algorithm in a setting without gravity (i.e., with the possibility of letting squares "hang in the air") based on ideas of shelf-packing: Squares are assigned to different horizontal levels, allowing an analysis that is reminiscent of some bin-packing arguments. We apply a geometric analysis to establish a competitive factor of 3.5 for the bottom-left heuristic and present a 34/13 = 2.615...-competitive algorithm.
Concerning the absolute competitive ratio, Baker et al. @cite_13 present two algorithms with competitive ratios @math and @math . If the input sequence consists only of squares, the competitive ratio reduces to @math for both algorithms. These algorithms are the first shelf algorithms: a shelf algorithm classifies the rectangles according to their height, i.e., a rectangle is in class @math if its height is in the interval @math , for a parameter @math . Each class is packed into a separate shelf, i.e., into a rectangular area of width one and height @math , inside the strip. A bin packing algorithm is used as a subroutine to pack the items. Ye @cite_3 presents an algorithm with absolute competitive factor @math . Lower bounds for the absolute performance ratio are @math for sequences of rectangles and @math for sequences of squares @cite_19 .
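A compact sketch of the shelf idea (with our own parameter names and a next-fit flavour; it is not any of the cited algorithms): an item of height h is assigned to the class s with r^(s+1) < h <= r^s and packed left to right into shelves of height r^s, and a new shelf of that class is opened whenever the current one is full.

import math

def shelf_pack(items, r=0.7):
    """items: list of (height, width) pairs with values in (0, 1]. Returns the strip height used."""
    open_shelf = {}  # class s -> remaining horizontal space in its currently open shelf
    total_height = 0.0
    for h, w in items:
        s = math.floor(math.log(h, r))  # the class with r^(s+1) < h <= r^s
        if open_shelf.get(s, 0.0) < w:  # current shelf of this class cannot take the item
            total_height += r ** s      # open a fresh shelf of height r^s on top of the strip
            open_shelf[s] = 1.0
        open_shelf[s] -= w
    return total_height

if __name__ == "__main__":
    import random
    random.seed(3)
    squares = [(side, side) for side in (random.uniform(0.05, 1.0) for _ in range(200))]
    print(round(shelf_pack(squares), 3))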
{ "cite_N": [ "@cite_19", "@cite_13", "@cite_3" ], "mid": [ "1972899329", "2081987145", "" ], "abstract": [ "Many problems, such as cutting stock problems and the scheduling of tasks with a shared resource, can be viewed as two-dimensional bin packing problems. Using the two-dimensional packing model of Baker, Coffman, and Rivest, a finite list L of rectangles is to be packed into a rectangular bin of finite width but infinite height, so as to minimize the total height used. An algorithm which packs the list in the order given without looking ahead or moving pieces already packed is called an on-line algorithm. Since the problem of finding an optimal packing is NP-hard, previous work has been directed at finding approximation algorithms. Most of the approximation algorithms which have been studied are on-line except that they require the list to have been previously sorted by height or width. This paper examines lower bounds for the worst-case performance of on-line algorithms for both non-preordered lists and for lists preordered by increasing or decreasing height or width.", "This paper studies two approximation algorithms for packing rectangles, using the two-dimensional packing model of Baker, Coffman and Rivest [SIAM J. Comput., 9 (1980), pp. 846–855]. The algorithms studied are called next-fit and first-fit shelf algorithms, respectively. They differ from previous algorithms by packing the rectangles in the order given; the previous algorithms required sorting the rectangles by decreasing height or width before packing them, which is not possible in some applications. The shelf algorithms are a modification of the next-fit and first-fit decreasing height level algorithms of Coffman, Garey, Johnson and Tarjan [SIAM J. Comput., 9 (1980), pp. 808–826]. Each shelf algorithm takes a parameter r. It is shown that by choosing r appropriately, the asymptotic worst case performance of the shelf algorithms can be made arbitrarily close to that of the next-fit and first-fit level algorithms, without the restriction that items must be packed in order of decreasing height. Nonasymptoti...", "" ] }
1010.4502
2951728753
We analyze the problem of packing squares in an online fashion: Given a semi-infinite strip of width 1 and an unknown sequence of squares of side length in [0,1] that arrive from above, one at a time. The objective is to pack these items as they arrive, minimizing the resulting height. Just like in the classical game of Tetris, each square must be moved along a collision-free path to its final destination. In addition, we account for gravity in both motion (squares must never move up) and position (any final destination must be supported from below). A similar problem has been considered before; the best previous result is by Azar and Epstein, who gave a 4-competitive algorithm in a setting without gravity (i.e., with the possibility of letting squares "hang in the air") based on ideas of shelf-packing: Squares are assigned to different horizontal levels, allowing an analysis that is reminiscent of some bin-packing arguments. We apply a geometric analysis to establish a competitive factor of 3.5 for the bottom-left heuristic and present a 34/13 = 2.615...-competitive algorithm.
Every reader is certainly familiar with the classical game of Tetris: given a strip of fixed width, find an online placement for a sequence of objects falling down from above such that space is utilized as well as possible. Compared to the strip packing problem, the objective differs slightly, as Tetris aims at filling rows; in actual optimization scenarios this is less interesting, because it is not critical whether a row is filled to precisely 100%, and full rows do not magically disappear in real life. In this process, no item can ever move upward, no collisions between objects may occur, an item comes to a stop if and only if it is supported from below, and each placement has to be fixed before the next item arrives. Even disregarding the difficulty of ever-increasing speed, Tetris is notoriously difficult: Breukelaar et al. @cite_11 show that Tetris is NP-hard, even for the original, limited set of different objects.
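The placement rules listed above can be made precise for axis-aligned squares; the sketch below checks only the two static conditions (no overlap with already placed squares, support from below) for a candidate final position. Modeling the existence of a collision-free, never-upward path is deliberately left out, and the representation of squares as (x, y, side) triples is an assumption of the sketch, not notation from the paper.

```python
# Static validity checks for a square's final position: it must not overlap any
# previously placed square and must be supported from below (by the floor or by
# the top edge of some placed square). Squares are (x, y, side) with (x, y) the
# lower-left corner; all names are illustrative.

EPS = 1e-9

def overlaps(a, b):
    ax, ay, asz = a
    bx, by, bsz = b
    # Open-interval overlap in both axes means the interiors intersect.
    return (ax < bx + bsz - EPS and bx < ax + asz - EPS and
            ay < by + bsz - EPS and by < ay + bsz - EPS)

def supported(square, placed):
    x, y, s = square
    if y <= EPS:                      # resting on the floor of the strip
        return True
    for px, py, ps in placed:
        touches_top = abs(py + ps - y) <= EPS
        horizontal_contact = min(x + s, px + ps) - max(x, px) > EPS
        if touches_top and horizontal_contact:
            return True
    return False

def valid_final_position(square, placed):
    return supported(square, placed) and not any(overlaps(square, p) for p in placed)

if __name__ == "__main__":
    placed = [(0.0, 0.0, 0.5)]
    print(valid_final_position((0.0, 0.5, 0.3), placed))   # True: rests on top
    print(valid_final_position((0.6, 0.2, 0.3), placed))   # False: hangs in the air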
{ "cite_N": [ "@cite_11" ], "mid": [ "2146302034" ], "abstract": [ "In the popular computer game of Tetris, the player is given a sequence of tetromino pieces and must pack them into a rectangular gameboard initially occupied by a given configuration of filled squares; any completely filled row of the gameboard is cleared and all filled squares above it drop by one row. We prove that in the offline version of Tetris, it is -complete to maximize the number of cleared rows, maximize the number of tetrises (quadruples of rows simultaneously filled and cleared), minimize the maximum height of an occupied square, or maximize the number of pieces placed before the game ends. We furthermore show the extreme inapproximability of the first and last of these objectives to within a factor of p1-e, when given a sequence of p pieces, and the inapproximability of the third objective to within a factor of 2-e, for any e>0. Our results hold under several variations on the rules of Tetris, including different models of rotation, limitations on player agility, and restricted piece sets." ] }
1010.4502
2951728753
We analyze the problem of packing squares in an online fashion: Given a semi-infinite strip of width 1 and an unknown sequence of squares of side length in [0,1] that arrive from above, one at a time. The objective is to pack these items as they arrive, minimizing the resulting height. Just like in the classical game of Tetris, each square must be moved along a collision-free path to its final destination. In addition, we account for gravity in both motion (squares must never move up) and position (any final destination must be supported from below). A similar problem has been considered before; the best previous result is by Azar and Epstein, who gave a 4-competitive algorithm in a setting without gravity (i.e., with the possibility of letting squares "hang in the air") based on ideas of shelf-packing: Squares are assigned to different horizontal levels, allowing an analysis that is reminiscent of some bin-packing arguments. We apply a geometric analysis to establish a competitive factor of 3.5 for the bottom-left heuristic and present a 34/13 = 2.615...-competitive algorithm.
Tetris-like online packing has been considered before. Most notably, Azar and Epstein @cite_12 consider online packing of rectangles into a strip; just as in Tetris, they study the setting with or without rotation of objects. For the case without rotation, they show that no constant competitive ratio is possible, unless there is a fixed-size lower bound of @math on the side length of the objects, in which case they establish an upper bound of @math on the competitive ratio.
{ "cite_N": [ "@cite_12" ], "mid": [ "2052713537" ], "abstract": [ "The paper considerspacking of rectanglesinto an infinite bin. Similar to theTetris game, the rectangles arrive from the top and, once placed, cannot be moved again. The rectangles are moved inside the bin to reach their place. For the case in which rotations are allowed, we design an algorithm whose performance ratio is constant. In contrast, if rotations are not allowed, we show that no algorithm of constant ratio exists. For this case we design an algorithm with performance ratio ofO(log(1 ?)), where ? is the minimum width of any rectangle. We also show that no algorithm can achieve a better ratio than?(log(1 ?))for this case." ] }
1010.4502
2951728753
We analyze the problem of packing squares in an online fashion: Given a semi-infinite strip of width 1 and an unknown sequence of squares of side length in [0,1] that arrive from above, one at a time. The objective is to pack these items as they arrive, minimizing the resulting height. Just like in the classical game of Tetris, each square must be moved along a collision-free path to its final destination. In addition, we account for gravity in both motion (squares must never move up) and position (any final destination must be supported from below). A similar problem has been considered before; the best previous result is by Azar and Epstein, who gave a 4-competitive algorithm in a setting without gravity (i.e., with the possibility of letting squares "hang in the air") based on ideas of shelf-packing: Squares are assigned to different horizontal levels, allowing an analysis that is reminiscent of some bin-packing arguments. We apply a geometric analysis to establish a competitive factor of 3.5 for the bottom-left heuristic and present a 34/13 = 2.615...-competitive algorithm.
For the case in which rotation is allowed, they present a 4-competitive strategy based on shelf-packing methods: each rectangle is rotated such that its narrow side becomes the bottom side. The algorithm tries to maintain a corridor at the right side of the strip to move the rectangles to their shelves. If a shelf is full or the path to it is blocked by a large item, a new shelf is opened. Until now, this has also been the best deterministic upper bound for squares. Note that gravity is not taken into account in this strategy, as items are allowed to be placed at their assigned levels even without support from below. Coffman et al. @cite_1 consider probabilistic aspects of online rectangle packing without rotation and with the Tetris constraint: if @math rectangle side lengths are chosen uniformly at random from the interval @math , they show that there is a lower bound of @math on the expected height for any algorithm. Moreover, they propose an algorithm that achieves an asymptotic expected height of @math .
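A minimal fragment of the strategy described above, covering only its first two steps: rotate each rectangle so that its narrower side becomes the bottom side, then assign it to a height class as in the shelf sketch earlier. The corridor bookkeeping and the rule for opening new shelves are omitted, and the parameter r and all names are illustrative assumptions rather than the choices made by Azar and Epstein.

```python
# Rotate so the narrow side is at the bottom, then classify by the (post-rotation)
# height; this is only the preprocessing step of a shelf strategy, not the full
# 4-competitive algorithm.

import math

def rotate_narrow_side_down(w, h):
    # After rotation the bottom side has length min(w, h).
    return (min(w, h), max(w, h))

def shelf_class(height, r=0.5):
    # Class j satisfies r**(j+1) < height <= r**j, as in the earlier shelf sketch.
    return math.floor(math.log(height, r))

if __name__ == "__main__":
    for rect in [(0.8, 0.2), (0.1, 0.6)]:
        w, h = rotate_narrow_side_down(*rect)
        print(rect, "->", (w, h), "class", shelf_class(h))
```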
{ "cite_N": [ "@cite_1" ], "mid": [ "2012437618" ], "abstract": [ "Rectangles with dimensions independently chosen from a uniform distribution on [0, 1] are packed on-line into a unit width strip under a constraint like that of the Tetris game: rectangles arrive from the top and must be moved inside the strip to reach their place; once placed, they cannot be moved again. Cargo loading applications impose similar constraints. This paper assumes that rectangles must be moved without rotation. For n rectangles, the resulting packing height is shown to have an asymptotic expected value of at least (0.31382733 ... )n under any on-line packing algorithm. An on-line algorithm is presented that achieves an asymptotic expected height of (0.36976421 ... )n. This algorithm improves the bound achieved in Next Fit Level (NFL) packing, by compressing the items packed on two successive levels of an NFL packing via on-line movement admissible under the Tetris-like constraint." ] }
1010.4502
2951728753
We analyze the problem of packing squares in an online fashion: Given a semi-infinite strip of width 1 and an unknown sequence of squares of side length in [0,1] that arrive from above, one at a time. The objective is to pack these items as they arrive, minimizing the resulting height. Just like in the classical game of Tetris, each square must be moved along a collision-free path to its final destination. In addition, we account for gravity in both motion (squares must never move up) and position (any final destination must be supported from below). A similar problem has been considered before; the best previous result is by Azar and Epstein, who gave a 4-competitive algorithm in a setting without gravity (i.e., with the possibility of letting squares "hang in the air") based on ideas of shelf-packing: Squares are assigned to different horizontal levels, allowing an analysis that is reminiscent of some bin-packing arguments. We apply a geometric analysis to establish a competitive factor of 3.5 for the bottom-left heuristic and present a 34/13 = 2.615...-competitive algorithm.
There is one negative result for the setting with both the Tetris and the gravity constraint when rotation is not allowed @cite_12 : if all rectangles have a width of at least @math or of at most @math , then the competitive factor of any algorithm is @math .
{ "cite_N": [ "@cite_12" ], "mid": [ "2052713537" ], "abstract": [ "The paper considerspacking of rectanglesinto an infinite bin. Similar to theTetris game, the rectangles arrive from the top and, once placed, cannot be moved again. The rectangles are moved inside the bin to reach their place. For the case in which rotations are allowed, we design an algorithm whose performance ratio is constant. In contrast, if rotations are not allowed, we show that no algorithm of constant ratio exists. For this case we design an algorithm with performance ratio ofO(log(1 ?)), where ? is the minimum width of any rectangle. We also show that no algorithm can achieve a better ratio than?(log(1 ?))for this case." ] }