Columns: aid (string, 9–15 chars), mid (string, 7–10 chars), abstract (string, 78–2.56k chars), related_work (string, 92–1.77k chars), ref_abstract (dict)
1104.2944
2949310786
In this paper, we study the question of how efficiently a collection of interconnected nodes can perform a global computation in the widely studied GOSSIP model of communication. In this model, nodes do not know the global topology of the network, and they may only initiate contact with a single neighbor in each round. This model contrasts with the much less restrictive LOCAL model, where a node may simultaneously communicate with all of its neighbors in a single round. A basic question in this setting is how many rounds of communication are required for the information dissemination problem, in which each node has some piece of information and is required to collect all others. In this paper, we give an algorithm that solves the information dissemination problem in at most @math rounds in a network of diameter @math , with no dependence on the conductance. This is at most an additive polylogarithmic factor from the trivial lower bound of @math , which applies even in the LOCAL model. In fact, we prove that something stronger is true: any algorithm that requires @math rounds in the LOCAL model can be simulated in @math rounds in the GOSSIP model. We thus prove that these two models of distributed computation are essentially equivalent.
The model of communication, where each node communicates with each of its neighbors in every round, was formalized by Peleg @cite_22 . Information spreading in this model requires a number of rounds which is equal to the diameter of the communication graph. Many other distributed tasks have been studied in this model, and below we mention a few in order to give a sense of the variety of problems studied. These include computing maximal independent sets @cite_5 , graph colorings @cite_27 , computing capacitated dominating sets @cite_24 , general covering and packing problems @cite_3 , and general techniques for distributed symmetry breaking @cite_26 .
{ "cite_N": [ "@cite_26", "@cite_22", "@cite_3", "@cite_24", "@cite_27", "@cite_5" ], "mid": [ "2119761906", "1568961751", "1998137177", "2687257225", "2034499376", "1987125980" ], "abstract": [ "We introduce Multi-Trials, a new technique for symmetry breaking for distributed algorithms and apply it to various problems in general graphs. For instance, we present three randomized algorithms for distributed (vertex or edge) coloring improving on previous algorithms and showing a time-color trade-off. To get a Δ+1 coloring takes time O(log Δ + √log n). To obtain an O(Δ + log^{1+1/log* n} n) coloring takes time O(log* n). This is more than an exponential improvement in time for graphs of polylogarithmic degree. Our fastest algorithm works in constant time using O(Δ log^(c) n + log^{1+1/c} n) colors, where c denotes an arbitrary constant and log^(c) n denotes the c-times (recursively) applied logarithm of n. We also use the Multi-Trials technique to compute network decompositions and to compute maximal independent set (MIS), obtaining new results for several graph classes.", "", "Achieving a global goal based on local information is challenging, especially in complex and large-scale networks such as the Internet or even the human brain. In this paper, we provide an almost tight classification of the possible trade-off between the amount of local information and the quality of the global solution for general covering and packing problems. Specifically, we give a distributed algorithm using only small messages which obtains a (ρΔ)^{1/k}-approximation for general covering and packing problems in time O(k²), where ρ depends on the LP's coefficients. If message size is unbounded, we present a second algorithm that achieves an O(n^{1/k})-approximation in O(k) rounds. 
Finally, we prove that these algorithms are close to optimal by giving a lower bound on the approximability of packing problems given that each node has to base its decision on information from its k-neighborhood.", "We study local, distributed algorithms for the capacitated minimum dominating set (CapMDS) problem, which arises in various distributed network applications. Given a network graph G=(V,E), and a capacity cap(v)∈ℕ for each node v∈V, the CapMDS problem asks for a subset S⊆V of minimal cardinality, such that every network node not in S is covered by at least one neighbor in S, and every node v∈S covers at most cap(v) of its neighbors. We prove that in general graphs and even with uniform capacities, the problem is inherently non-local, i.e., every distributed algorithm achieving a non-trivial approximation ratio must have a time complexity that essentially grows linearly with the network diameter. On the other hand, if for some parameter ε>0, capacities can be violated by a factor of 1+ε, CapMDS becomes much more local. Particularly, based on a novel distributed randomized rounding technique, we present a distributed bi-criteria algorithm that achieves an O(log Δ)-approximation in time O(log³ n + log(n)/ε), where n and Δ denote the number of nodes and the maximal degree in G, respectively. Finally, we prove that in geometric network graphs typically arising in wireless settings, the uniform problem can be approximated within a constant factor in logarithmic time, whereas the non-uniform problem remains entirely non-local.", "The distributed (Δ + 1)-coloring problem is one of the most fundamental and well-studied problems in distributed algorithms. Starting with the work of Cole and Vishkin in '86, there was a long line of gradually improving algorithms published. The current state-of-the-art running time is O(Δ log Δ + log* n), due to Kuhn and Wattenhofer, PODC'06. 
Linial (FOCS'87) has proved a lower bound of (1/2) log* n for the problem, and Szegedy and Vishwanathan (STOC'93) provided a heuristic argument that shows that algorithms from a wide family of locally iterative algorithms are unlikely to achieve running time smaller than Θ(Δ log Δ). We present a deterministic (Δ + 1)-coloring distributed algorithm with running time O(Δ) + (1/2) log* n. We also present a tradeoff between the running time and the number of colors, and devise an O(Δ·t)-coloring algorithm with running time O(Δ/t + log* n), for any parameter t, 1", "We study the distributed maximal independent set (henceforth, MIS) problem on sparse graphs. Currently, there are known algorithms with a sublogarithmic running time for this problem on oriented trees and graphs of bounded degrees. We devise the first sublogarithmic algorithm for computing MIS on graphs of bounded arboricity. This is a large family of graphs that includes graphs of bounded degree, planar graphs, graphs of bounded genus, graphs of bounded treewidth, graphs that exclude a fixed minor, and many other graphs. We also devise efficient algorithms for coloring graphs from these families. These results are achieved by the following technique that may be of independent interest. Our algorithm starts with computing a certain graph-theoretic structure, called Nash-Williams forests-decomposition. Then this structure is used to compute the MIS or coloring. Our results demonstrate that this methodology is very powerful. Finally, we show nearly-tight lower bounds on the running time of any distributed algorithm for computing a forests-decomposition." ] }
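The GOSSIP model described in the record above is easy to prototype. The following is an illustrative sketch, not the paper's algorithm: every node initiates contact with one uniformly random neighbor per round, the contacted pair exchange everything they know (push-pull), and we count rounds until information dissemination completes. The graph, seed, and round-counting convention here are assumptions for illustration only.

```python
import random

def gossip_rounds(adj, seed=0):
    """Simulate push-pull gossip: each round, every node contacts one
    uniformly random neighbor and the pair merge their token sets.
    Returns the number of rounds until all nodes know all tokens."""
    rng = random.Random(seed)
    n = len(adj)
    know = [{v} for v in range(n)]          # node v starts with its own token
    rounds = 0
    while any(len(k) < n for k in know):
        rounds += 1
        for v in range(n):
            u = rng.choice(adj[v])          # initiate contact with one neighbor
            merged = know[v] | know[u]      # bidirectional (push-pull) exchange
            know[v] = set(merged)
            know[u] = set(merged)
    return rounds

# Hypothetical example: a cycle of 8 nodes (diameter 4)
n = 8
adj = [[(v - 1) % n, (v + 1) % n] for v in range(n)]
print(gossip_rounds(adj))
```

The single-neighbor-per-round restriction is exactly what distinguishes GOSSIP from LOCAL in the abstract: in LOCAL, each `v` could merge with all of `adj[v]` in one round.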
1104.2964
2952004720
We study social welfare in one-sided matching markets where the goal is to efficiently allocate n items to n agents that each have a complete, private preference list and a unit demand over the items. Our focus is on allocation mechanisms that do not involve any monetary payments. We consider two natural measures of social welfare: the ordinal welfare factor, which measures the number of agents that are at least as happy as in some unknown, arbitrary benchmark allocation, and the linear welfare factor, which assumes an agent's utility linearly decreases down his preference list, and measures the total utility relative to that achieved by an optimal allocation. We analyze two matching mechanisms which have been extensively studied by economists. The first mechanism is the random serial dictatorship (RSD), where agents are ordered in accordance with a randomly chosen permutation and are successively allocated their best choice among the unallocated items. The second mechanism is the probabilistic serial (PS) mechanism of Bogomolnaia and Moulin [8], which computes a fractional allocation that can be expressed as a convex combination of integral allocations. The welfare factor of a mechanism is the infimum over all instances. For RSD, we show that the ordinal welfare factor is asymptotically 1/2, while the linear welfare factor lies in the interval [0.526, 2/3]. For PS, we show that the ordinal welfare factor is also 1/2 while the linear welfare factor is roughly 2/3. To our knowledge, these results are the first non-trivial performance guarantees for these natural mechanisms.
Finally, the study of mechanism design without money has been of recent interest in the computer science community @cite_30 @cite_11 . We have already mentioned the relation to popular matchings in the introduction. There has been work motivated by the market exchange problem @cite_0 @cite_26 and the item allocation problem @cite_21 @cite_27 ; however, none of them address the problem that we study.
{ "cite_N": [ "@cite_30", "@cite_26", "@cite_21", "@cite_0", "@cite_27", "@cite_11" ], "mid": [ "2108957189", "", "49938292", "2072933301", "1503617009", "" ], "abstract": [ "The literature on algorithmic mechanism design is mostly concerned with game-theoretic versions of optimization problems to which standard economic money-based mechanisms cannot be applied efficiently. Recent years have seen the design of various truthful approximation mechanisms that rely on payments. In this article, we advocate the reconsideration of highly structured optimization problems in the context of mechanism design. We explicitly argue for the first time that, in such domains, approximation can be leveraged to obtain truthfulness without resorting to payments. This stands in contrast to previous work where payments are almost ubiquitous and (more often than not) approximation is a necessary evil that is required to circumvent computational complexity. We present a case study in approximate mechanism design without money. In our basic setting, agents are located on the real line and the mechanism must select the location of a public facility; the cost of an agent is its distance to the facility. We establish tight upper and lower bounds for the approximation ratio given by strategyproof mechanisms without payments, with respect to both deterministic and randomized mechanisms, under two objective functions: the social cost and the maximum cost. We then extend our results in two natural directions: a domain where two facilities must be located and a domain where each agent controls multiple locations.", "", "We investigate the problem of allocating items (private goods) among competing agents in a setting that is both prior-free and payment-free. Specifically, we focus on allocating multiple heterogeneous items between two agents with additive valuation functions. Our objective is to design strategy-proof mechanisms that are competitive against the most efficient (first-best) allocation. 
We introduce the family of linear increasing-price (LIP) mechanisms. The LIP mechanisms are strategy-proof, prior-free, and payment-free, and they are exactly the increasing-price mechanisms satisfying a strong responsiveness property. We show how to solve for competitive mechanisms within the LIP family. For the case of two items, we find a LIP mechanism whose competitive ratio is near optimal (the achieved competitive ratio is 0.828, while any strategy-proof mechanism is at most 0.841-competitive). As the number of items goes to infinity, we prove a negative result that any increasing-price mechanism (linear or nonlinear) has a maximal competitive ratio of 0.5. Our results imply that in some cases, it is possible to design good allocation mechanisms without payments and without priors.", "Consider a matching problem on a graph where disjoint sets of vertices are privately owned by self-interested agents. An edge between a pair of vertices indicates compatibility and allows the vertices to match. We seek a mechanism to maximize the number of matches despite self-interest, with agents that each want to maximize the number of their own vertices that match. Each agent can choose to hide some of its vertices, and then privately match the hidden vertices with any of its own vertices that go unmatched by the mechanism. A prominent application of this model is to kidney exchange, where agents correspond to hospitals and vertices to donor-patient pairs. Here hospitals may game an exchange by holding back pairs and harm social welfare. In this paper we seek to design mechanisms that are strategyproof, in the sense that agents cannot benefit from hiding vertices, and approximately maximize efficiency, i.e., produce a matching that is close in cardinality to the maximum cardinality matching. Our main result is the design and analysis of the eponymous Mix-and-Match mechanism; we show that this randomized mechanism is strategyproof and provides a 2-approximation. 
Lower bounds establish that the mechanism is near optimal.", "We study the problem of allocating a single item repeatedly among multiple competing agents, in an environment where monetary transfers are not possible. We design (Bayes-Nash) incentive compatible mechanisms that do not rely on payments, with the goal of maximizing expected social welfare. We first focus on the case of two agents. We introduce an artificial payment system, which enables us to construct repeated allocation mechanisms without payments based on one-shot allocation mechanisms with payments. Under certain restrictions on the discount factor, we propose several repeated allocation mechanisms based on artificial payments. For the simple model in which the agents' valuations are either high or low, the mechanism we propose is 0.94-competitive against the optimal allocation mechanism with payments. For the general case of any prior distribution, the mechanism we propose is 0.85-competitive. We generalize the mechanism to cases of three or more agents. For any number of agents, the mechanism we obtain is at least 0.75-competitive. The obtained competitive ratios imply that for repeated allocation, artificial payments may be used to replace real monetary payments, without incurring too much loss in social welfare.", "" ] }
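The random serial dictatorship mechanism analyzed in the record above is simple enough to state in a few lines. This is a minimal sketch under the stated unit-demand model; the preference lists in the example are hypothetical, and the seed parameter is an implementation convenience, not part of the mechanism.

```python
import random

def random_serial_dictatorship(prefs, seed=0):
    """Random serial dictatorship (RSD): agents are ordered by a random
    permutation; each in turn takes their most-preferred unallocated item.
    prefs[a] is agent a's complete preference list, best item first."""
    rng = random.Random(seed)
    agents = list(range(len(prefs)))
    rng.shuffle(agents)                      # random dictator order
    taken = set()
    alloc = {}
    for a in agents:
        for item in prefs[a]:                # best remaining choice
            if item not in taken:
                alloc[a] = item
                taken.add(item)
                break
    return alloc

# Hypothetical example: 3 agents, 3 items; everyone ranks item 0 first,
# so whoever the permutation places first gets it.
prefs = [[0, 1, 2], [0, 2, 1], [0, 1, 2]]
print(random_serial_dictatorship(prefs))
```

Because preference lists are complete, every agent receives exactly one item and every item is allocated, which is the setting in which the 1/2 ordinal welfare factor above is proved.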
1104.1377
2949195414
For input @math , let @math denote the set of outputs that are the "legal" answers for a computational problem @math . Suppose @math and members of @math are so large that there is not time to read them in their entirety. We propose a model of local computation algorithms which, for a given input @math , support queries by a user to values of specified locations @math in a legal output @math . When more than one legal output @math exists for a given @math , the local computation algorithm should output in a way that is consistent with at least one such @math . Local computation algorithms are intended to distill the common features of several concepts that have appeared in various algorithmic subfields, including local distributed computation, local algorithms, locally decodable codes, and local reconstruction. We develop a technique, based on known constructions of small sample spaces of @math -wise independent random variables and Beck's analysis in his algorithmic approach to the Lovász Local Lemma, which under certain conditions can be applied to construct local computation algorithms that run in polylogarithmic time and space. We apply this technique to maximal independent set computations, scheduling radio network broadcasts, hypergraph coloring and satisfying @math -SAT formulas.
Recently, local algorithms have been demonstrated to be applicable for computations on the web graph. In @cite_20 @cite_41 @cite_24 @cite_38 @cite_34 , local algorithms are given which, for a given vertex @math in the web graph, compute an approximation to @math 's personalized PageRank vector and compute the vertices that contribute significantly to @math 's PageRank. In these algorithms, evaluations are made only in the nearby neighborhood of @math , so that the running time depends on the accuracy parameters input to the algorithm, but there is no running-time dependence on the size of the web graph. Local graph partitioning algorithms have been presented in @cite_31 @cite_38 which find subsets of vertices whose internal connections are significantly richer than their external connections. The running time of these algorithms depends on the size of the cluster that is output, which can be much smaller than the size of the entire graph.
{ "cite_N": [ "@cite_38", "@cite_41", "@cite_24", "@cite_31", "@cite_34", "@cite_20" ], "mid": [ "2086254934", "2039191721", "2033252376", "2045107949", "2106891910", "2069153192" ], "abstract": [ "A local graph partitioning algorithm finds a cut near a specified starting vertex, with a running time that depends largely on the size of the small side of the cut, rather than the size of the input graph. In this paper, we present a local partitioning algorithm using a variation of PageRank with a specified starting distribution. We derive a mixing result for PageRank vectors similar to that for random walks, and show that the ordering of the vertices produced by a PageRank vector reveals a cut with small conductance. In particular, we show that for any set C with conductance φ and volume k, a PageRank vector with a certain starting distribution can be used to produce a set with conductance O(√(φ log k)). We present an improved algorithm for computing approximate PageRank vectors, which allows us to find such a set in time proportional to its size. In particular, we can find a cut with conductance at most φ, whose small side has volume at least 2^b, in time O(2^b log² m / φ²), where m is the number of edges in the graph. By combining small sets found by this local partitioning algorithm, we obtain a cut with conductance φ and approximately optimal balance in time O(m log⁴ m / φ²).", "We introduce a novel bookmark-coloring algorithm (BCA) that computes authority weights over the web pages utilizing the web hyperlink structure. The computed vector (BCV) is similar to the PageRank vector defined for a page-specific teleportation. Meanwhile, BCA is very fast, and BCV is sparse. BCA also has important algebraic properties. 
If several BCVs corresponding to a set of pages (called hub) are known, they can be leveraged in computing arbitrary BCV via a straightforward algebraic process and hub BCVs can be efficiently computed and encoded.", "Personalized PageRank expresses link-based page quality around user selected pages. The only previous personalized PageRank algorithm that can serve on-line queries for an unrestricted choice of pages on large graphs is our Monte Carlo algorithm [WAW 2004]. In this paper we achieve unrestricted personalization by combining rounding and randomized sketching techniques in the dynamic programming algorithm of Jeh and Widom [WWW 2003]. We evaluate the precision of approximation experimentally on large scale real-world data and find significant improvement over previous results. As a key theoretical contribution we show that our algorithms use an optimal amount of space by also improving earlier asymptotic worst-case lower bounds. Our lower bounds and algorithms apply to the SimRank as well; of independent interest is the reduction of the SimRank computation to personalized PageRank.", "We present algorithms for solving symmetric, diagonally-dominant linear systems to accuracy e in time linear in their number of non-zeros and log (κ f (A) e), where κ f (A) is the condition number of the matrix defining the linear system. Our algorithm applies the preconditioned Chebyshev iteration with preconditioners designed using nearly-linear time algorithms for graph sparsification and graph partitioning.", "Motivated by the problem of detecting link-spam, we consider the following graph-theoretic primitive: Given a webgraph G, a vertex v in G, and a parameter δ ∈ (0, 1), compute the set of all vertices that contribute to v at least a δ fraction of v's PageRank. We call this set the δ-contributing set of v. To this end, we define the contribution vector of v to be the vector whose entries measure the contributions of every vertex to the PageRank of v. 
A local algorithm is one that produces a solution by adaptively examining only a small portion of the input graph near a specified vertex. We give an efficient local algorithm that computes an ε-approximation of the contribution vector for a given vertex by adaptively examining O(1/ε) vertices. Using this algorithm, we give a local approximation algorithm for the primitive defined above. Specifically, we give an algorithm that returns a set containing the δ-contributing set of v and at most O(1/δ) vertices from the δ/2-contributing set of v, and which does so by examining at most O(1/δ) vertices. We also give a local algorithm for solving the following problem: If there exist k vertices that contribute a ρ-fraction to the PageRank of v, find a set of k vertices that contribute at least a (ρ-ε)-fraction to the PageRank of v. In this case, we prove that our algorithm examines at most O(k/ε) vertices.
We present efficient dynamic programming algorithms for computing partial vectors, an algorithm for constructing personalized views from partial vectors, and experimental results demonstrating the effectiveness and scalability of our techniques." ] }
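The local personalized-PageRank approximations cited in this record share a common "push" idea: keep an approximate vector and a residual, and repeatedly push residual mass from nodes where it is large, touching only vertices near the source. The sketch below is in the spirit of that approach (Andersen et al. style); the function name, termination rule, and constants are illustrative assumptions, not the exact cited algorithms.

```python
def approx_ppr(adj, source, alpha=0.15, eps=1e-4):
    """Locally approximate a personalized PageRank vector by repeatedly
    pushing residual mass from high-residual nodes. Only nodes near
    `source` are ever touched, so the cost does not depend on graph size."""
    p = {}                        # approximate PageRank values
    r = {source: 1.0}             # residual mass still to be distributed
    queue = [source]
    while queue:
        u = queue.pop()
        deg = len(adj[u])
        if r.get(u, 0.0) < eps * deg:
            continue              # residual too small; skip stale entry
        ru = r[u]
        p[u] = p.get(u, 0.0) + alpha * ru   # keep an alpha fraction here
        r[u] = 0.0
        share = (1 - alpha) * ru / deg
        for v in adj[u]:          # spread the rest to neighbors
            r[v] = r.get(v, 0.0) + share
            if r[v] >= eps * len(adj[v]):
                queue.append(v)
    return p

# Hypothetical example: personalized PageRank around node 0 of a 6-cycle
n = 6
adj = [[(v - 1) % n, (v + 1) % n] for v in range(n)]
p = approx_ppr(adj, 0)
print(p[0], sum(p.values()))
```

Each push removes an alpha fraction of some residual, so the total residual shrinks and the loop terminates; the invariant sum(p) + sum(r) = 1 bounds the approximation error by the leftover residual.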
1104.1377
2949195414
For input @math , let @math denote the set of outputs that are the "legal" answers for a computational problem @math . Suppose @math and members of @math are so large that there is not time to read them in their entirety. We propose a model of local computation algorithms which for a given input @math , support queries by a user to values of specified locations @math in a legal output @math . When more than one legal output @math exists for a given @math , the local computation algorithm should output in a way that is consistent with at least one such @math . Local computation algorithms are intended to distill the common features of several concepts that have appeared in various algorithmic subfields, including local distributed computation, local algorithms, locally decodable codes, and local reconstruction. We develop a technique, based on known constructions of small sample spaces of @math -wise independent random variables and Beck's analysis in his algorithmic approach to the Lov ' a sz Local Lemma, which under certain conditions can be applied to construct local computation algorithms that run in polylogarithmic time and space. We apply this technique to maximal independent set computations, scheduling radio network broadcasts, hypergraph coloring and satisfying @math -SAT formulas.
Though most of the previous examples are for sparse graphs or other problems which have some sort of sparsity, local computation algorithms have also been provided for problems on dense graphs. The property testing algorithms of @cite_36 use a small sample of the vertices (a type of core-set) to define a good graph coloring or partition of a dense graph. This approach yields local computation algorithms for finding a large partition of the graph and a coloring of the vertices which has relatively few edge violations.
{ "cite_N": [ "@cite_36" ], "mid": [ "1970630090" ], "abstract": [ "In this paper, we consider the question of determining whether a function f has property P or is ε-far from any function with property P. A property testing algorithm is given a sample of the value of f on instances drawn according to some distribution. In some cases, it is also allowed to query f on instances of its choice. We study this question for different properties and establish some connections to problems in learning theory and approximation. In particular, we focus our attention on testing graph properties. Given access to a graph G in the form of being able to query whether an edge exists or not between a pair of vertices, we devise algorithms to test whether the underlying graph has properties such as being bipartite, k-colorable, or having a ρ-clique (clique of density ρ with respect to the vertex set). Our graph property testing algorithms are probabilistic and make assertions that are correct with high probability, while making a number of queries that is independent of the size of the graph. Moreover, the property testing algorithms can be used to efficiently (i.e., in time linear in the number of vertices) construct partitions of the graph that correspond to the property being tested, if it holds for the input graph." ] }
1104.0319
1763086816
Much of the past work in network analysis has focused on analyzing discrete graphs, where binary edges represent the "presence" or "absence" of a relationship. Since traditional network measures (e.g., betweenness centrality) utilize a discrete link structure, complex systems must be transformed to this representation in order to investigate network properties. However, in many domains there may be uncertainty about the relationship structure, and any uncertainty information would be lost in translation to a discrete representation. Uncertainty may arise in domains where there is moderating link information that cannot be easily observed, i.e., links become inactive over time but may not be dropped, or observed links may not always correspond to a valid relationship. In order to represent and reason with these types of uncertainty, we move beyond the discrete graph framework and develop social network measures based on a probabilistic graph representation. More specifically, we develop measures of path length, betweenness centrality, and clustering coefficient---one set based on sampling and one based on probabilistic paths. We evaluate our methods on three real-world networks from Enron, Facebook, and DBLP, showing that our proposed methods more accurately capture salient effects without being susceptible to local noise, and that the resulting analysis produces a better understanding of the graph structure and the uncertainty resulting from its change over time.
The notion of probabilistic graphs has been studied previously, notably by @cite_11 , @cite_17 and @cite_7 . @cite_11 showed how, for graphs with probability distributions over the weights for each edge, Monte Carlo methods can be used to determine the shortest-path probabilities between nodes. @cite_17 then extends this to find the shortest weighted paths most likely to complete within a certain time constraint (e.g., the shortest distance across town in under half an hour). In @cite_7 , the most probable shortest paths are used to estimate the @math -nearest neighbors in the graph for a particular node. Although @cite_7 draws sample graphs based on likelihood (i.e., sampling each edge according to its probability), in their estimate of the shortest path distribution they weight each sample graph based on its probability, which is incorrect unless the samples are drawn uniformly at random from the distribution. In this work, we sample in the same manner as @cite_7 , but weight each sample uniformly in our expectation calculations---since, when the graphs are drawn from the distribution based on their likelihood, the graphs with higher likelihood are already more likely to be sampled.
{ "cite_N": [ "@cite_17", "@cite_7", "@cite_11" ], "mid": [ "", "1154484500", "2062568266" ], "abstract": [ "", "Large probabilistic graphs arise in various domains spanning from social networks to biological and communication networks. An important query in these graphs is the k nearest-neighbor query, which involves finding and reporting the k closest nodes to a specific node. This query assumes the existence of a measure of the “proximity” or the “distance” between any two nodes in the graph. To that end, we propose various novel distance functions that extend well known notions of classical graph theory, such as shortest paths and random walks. We argue that many meaningful distance functions are computationally intractable to compute exactly. Thus, in order to process nearest-neighbor queries, we resort to Monte Carlo sampling and exploit novel graph-transformation ideas and pruning opportunities. In our extensive experimental analysis, we explore the trade-offs of our approximation algorithms and demonstrate that they scale well on real-world probabilistic graphs with tens of millions of edges.", "This paper considers the problem of finding shortest-path probability distributions in graphs whose branches are weighted with random lengths, examines the consequences of various assumptions concerning the nature of the available statistical information, and gives an exact method for computing the probability distribution, as well as methods based on hypothesis testing and statistical estimation. It presents Monte Carlo results and, based on these results, it develops an efficient method of hypothesis testing. Finally, it discusses briefly the pairwise comparison of paths." ] }
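The sampling scheme argued for in the related-work paragraph above can be sketched directly: draw each edge with its own probability (so sampled graphs already appear in proportion to their likelihood), then weight every sampled graph uniformly when averaging. This is an illustrative sketch, not the paper's implementation; the hop-count distance, trial count, and example graph are assumptions.

```python
import random
from collections import deque

def sample_shortest_path(pedges, n, s, t, trials=2000, seed=0):
    """Monte Carlo estimate of the expected hop-count shortest-path
    distance from s to t in a probabilistic graph. pedges is a list of
    (u, v, p) triples; each trial keeps edge (u, v) with probability p,
    and every sampled graph is weighted UNIFORMLY in the average."""
    rng = random.Random(seed)
    dists = []
    for _ in range(trials):
        adj = [[] for _ in range(n)]
        for u, v, p in pedges:               # sample edges by probability
            if rng.random() < p:
                adj[u].append(v)
                adj[v].append(u)
        dist = [None] * n                    # BFS from s in the sample
        dist[s] = 0
        q = deque([s])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if dist[v] is None:
                    dist[v] = dist[u] + 1
                    q.append(v)
        dists.append(dist[t] if dist[t] is not None else float('inf'))
    finite = [d for d in dists if d != float('inf')]
    return sum(finite) / len(finite) if finite else float('inf')

# Hypothetical example: unreliable direct edge 0-1, reliable detour via 2
pedges = [(0, 1, 0.5), (0, 2, 0.9), (2, 1, 0.9)]
print(sample_shortest_path(pedges, 3, 0, 1))
```

Because edges are sampled according to their probabilities, re-weighting each sampled graph by its likelihood (as the paragraph notes) would double-count likely graphs; the uniform average is the unbiased estimator over the reachable samples.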
1104.0457
2054575221
This paper investigates control laws allowing mobile, autonomous agents to optimally position themselves on the line for distributed sensing in a nonuniform field. We show that a simple static control law, based only on local measurements of the field by each agent, drives the agents close to the optimal positions after the agents execute in parallel a number of sensing/movement/computation rounds that is essentially quadratic in the number of agents. Further, we exhibit a dynamic control law which, under slightly stronger assumptions on the capabilities and knowledge of each agent, drives the agents close to the optimal positions after the agents execute in parallel a number of sensing/communication/computation/movement rounds that is essentially linear in the number of agents. Crucially, both algorithms are fully distributed and robust to unpredictable loss and addition of agents.
In @cite_0 , uniform coverage algorithms are derived using Voronoi cells and gradient laws for distributed dynamical systems. Uniform constrained coverage control is addressed in @cite_39 where the constraint is a minimum limit on node degree. Virtual potentials enable repulsion between agents to maximize coverage and attraction between agents to enforce the constraint. In @cite_10 , gradient control laws are proposed to move sensors to a configuration that maximizes expected event detection frequency. Local rules are enforced by defining a sensing radius for each agent, which also makes computations simpler. The approach is demonstrated for a nonuniform but symmetric density field with and without communication constraints. Further results for distributed coverage control are presented in @cite_37 for a coverage metric defined in terms of the Euclidean metric with a weighting factor that allows for nonuniformity. As in @cite_37 , the methodology makes use of Voronoi cells and Lloyd descent algorithms. The papers @cite_15 @cite_25 identified a class of non-convex regions for which the coverage problem may be solved by reduction to the convex case through a well-chosen transformation of the region. The papers @cite_36 @cite_16 explored an optimization-based approach to some complex variations of the covering problem.
{ "cite_N": [ "@cite_37", "@cite_36", "@cite_39", "@cite_0", "@cite_15", "@cite_16", "@cite_10", "@cite_25" ], "mid": [ "2167485994", "1984648987", "2153239294", "2007403280", "2042398697", "2156561078", "2135792961", "2084690910" ], "abstract": [ "This paper describes decentralized control laws for the coordination of multiple vehicles performing spatially distributed tasks. The control laws are based on a gradient descent scheme applied to a class of decentralized utility functions that encode optimal coverage and sensing policies. These utility functions are studied in geographical optimization problems and they arise naturally in vector quantization and in sensor allocation tasks. The approach exploits the computational geometry of spatial structures such as Voronoi diagrams.", "In this paper, we consider a class of dynamic vehicle routing problems, in which a number of mobile agents in the plane must visit target points generated over time by a stochastic process. It is desired to design motion coordination strategies in order to minimize the expected time between the appearance of a target point and the time it is visited by one of the agents. We propose control strategies that, while making minimal or no assumptions on communications between agents, provide the same level of steady-state performance achieved by the best-known decentralized strategies. In other words, we demonstrate that inter-agent communication does not improve the efficiency of such systems, but merely affects the rate of convergence to the steady state. Furthermore, the proposed strategies do not rely on the knowledge of the details of the underlying stochastic process. Copyright © 2007 John Wiley & Sons, Ltd.", "We consider the problem of self-deployment of a mobile sensor network. We are interested in a deployment strategy that maximizes the area coverage of the network with the constraint that each of the nodes has at least K neighbors, where K is a user-specified parameter. 
We propose an algorithm based on artificial potential fields which is distributed, scalable and does not require a prior map of the environment. Simulations establish that the resulting networks have the required degree with a high probability, are well connected and achieve good coverage. We present analytical results for the coverage achievable by uniform random and symmetrically tiled network configurations and use these to evaluate the performance of our algorithm.", "This paper discusses dynamical systems for disk-covering and sphere-packing problems. We present facility location functions from geometric optimization and characterize their differentiable properties. We design and analyze a collection of distributed control laws that are related to nonsmooth gradient systems. The resulting dynamical systems promise to be of use in coordination problems for networked robots; in this setting the distributed control laws correspond to local interactions between the robots. The technical approach relies on concepts from computational geometry, nonsmooth analysis, and the dynamical system approach to algorithms.", "We present a framework for extending coverage algorithms from convex to nonconvex domains. We are particularly interested in coverage in the presence of obstacles, that is domains that contain holes. We identify a class of connected regions in Ropf2 than can be mapped through a diffeomorphism to an almost convex region in Ropf2 - a convex region from which a finite (possibly empty) set of points has been subtracted. The transformation allows us to solve the coverage problem using the approach of , and obtain the solution of the original problem through inverse transformation. We provide a formal analysis of the approach and demonstrate its effectiveness through simulations. 
We conclude the paper with a discussion on possible extensions of the work that we are currently undertaking.", "In this paper we address the problem of dynamic coverage control of a convex polygonal region in the plane using N agents with bounded velocities and finite instantaneous area of coverage. The proposed coverage algorithm guarantees finite- time search of the region, does not depend on gradient-based methods, and can be carried out by the agents in a distributed fashion. We provide an upper bound on the completion time as well as the number of messages that need to be exchanged by the agents. A simulation is provided to illustrate the algorithm.", "We present a distributed coverage control scheme for cooperating mobile sensor networks. The mission space is modeled using a density function representing the frequency of random events taking place, with mobile sensors operating over a limited range defined by a probabilistic model. A gradient-based algorithm is designed requiring local information at each sensor and maximizing the joint detection probabilities of random events. We also incorporate communication costs into the coverage control problem, viewing the sensor network as a multi-source, single-basestation data collection network. Communication cost is modeled as the power consumption needed to deliver collected data from sensor nodes, thus trading off sensing coverage and communication cost. The control Scheme is tested in a simulation environment to illustrate its adaptive, distributed, and asynchronous properties.", "The paper describes a framework for solving the coverage problem for a class of non-convex domains. In we have shown how a diffeomorphism can be used to transform a non-convex coverage problem to a convex one to which the Lloyd?s algorithm can be applied. 
In this paper we show how a diffeomorphism can be constructed for convex regions with obstacles in its interior, so that the solution of the transformed problem yields the solution of the original non-convex problem. As part of this investigation we also identify stationary points of the Lloyd?s algorithm in non-convex domains. We provide the formal analysis of the approach and demonstrate its effectiveness through simulations." ] }
1104.0457
2054575221
This paper investigates control laws allowing mobile, autonomous agents to optimally position themselves on the line for distributed sensing in a nonuniform field. We show that a simple static control law, based only on local measurements of the field by each agent, drives the agents close to the optimal positions after the agents execute in parallel a number of sensing/movement/computation rounds that is essentially quadratic in the number of agents. Further, we exhibit a dynamic control law which, under slightly stronger assumptions on the capabilities and knowledge of each agent, drives the agents close to the optimal positions after the agents execute in parallel a number of sensing/communication/computation/movement rounds that is essentially linear in the number of agents. Crucially, both algorithms are fully distributed and robust to unpredictable loss and addition of agents.
The paper @cite_30 considered the general nonuniform coverage problem with a non-Euclidean distance, and it proposed and proved the correctness of a coverage control law in the plane. However, the control law of @cite_30 is only partially distributed, in that it relies on a "cartogram computation" step which requires some global knowledge of the domain.
{ "cite_N": [ "@cite_30" ], "mid": [ "2132399935" ], "abstract": [ "In this paper, we investigate nonuniform coverage of a planar region by a network of autonomous, mobile agents. We derive centralized nonuniform coverage control laws from uniform coverage algorithms using cartograms, transformations that map nonuniform metrics to a near Euclidean metric. We also investigate time-varying coverage metrics and the design of control algorithms to cover regions with slowly varying, nonuniform metrics. Our results are applicable to the design of mobile sensor networks, notably when the coverage metric varies as data is collected such as in the case of an information metric. The results apply also to the study of animal groups foraging for food that is nonuniformly distributed and possibly changing." ] }
1104.0888
1525037223
Determining the feasibility conditions for vector space interference alignment in the K-user MIMO interference channel with constant channel coefficients has attracted much recent attention yet remains unsolved. The main result of this paper is restricted to the symmetric square case where all transmitters and receivers have N antennas, and each user desires d transmit dimensions. We prove that alignment is possible if and only if the number of antennas satisfies N >= d(K+1)/2. We also show a necessary condition for feasibility of alignment with arbitrary system parameters. An algebraic geometry approach is central to the results.
The problem we consider, maximizing the degrees of freedom (dof) achievable by vector space strategies for the @math -user MIMO IC with a finite number of transmit and receive antennas, has received significant attention in the last several years. Cadambe and Jafar @cite_2 considered the problem for @math users and @math antennas, and showed that @math dof are achievable. For more than @math users or @math they assumed an infinite number of parallel channels and applied their main @math result. @cite_10 posed the problem of determining the feasibility of alignment, but left it unanswered and proposed a heuristic iterative numerical algorithm.
{ "cite_N": [ "@cite_10", "@cite_2" ], "mid": [ "2167357515", "1979408141" ], "abstract": [ "Recent results establish the optimality of interference alignment to approach the Shannon capacity of interference networks at high SNR. However, the extent to which interference can be aligned over a finite number of signalling dimensions remains unknown. Another important concern for interference alignment schemes is the requirement of global channel knowledge. In this work we provide examples of iterative algorithms that utilize the reciprocity of wireless networks to achieve interference alignment with only local channel knowledge at each node. These algorithms also provide numerical insights into the feasibility of interference alignment that are not yet available in theory.", "For the fully connected K user wireless interference channel where the channel coefficients are time-varying and are drawn from a continuous distribution, the sum capacity is characterized as C(SNR)=(K/2)log(SNR)+o(log(SNR)). Thus, the K user time-varying interference channel almost surely has K/2 degrees of freedom. Achievability is based on the idea of interference alignment. Examples are also provided of fully connected K user interference channels with constant (not time-varying) coefficients where the capacity is exactly achieved by interference alignment at all SNR values." ] }
1104.0888
1525037223
Determining the feasibility conditions for vector space interference alignment in the K-user MIMO interference channel with constant channel coefficients has attracted much recent attention yet remains unsolved. The main result of this paper is restricted to the symmetric square case where all transmitters and receivers have N antennas, and each user desires d transmit dimensions. We prove that alignment is possible if and only if the number of antennas satisfies N >= d(K+1)/2. We also show a necessary condition for feasibility of alignment with arbitrary system parameters. An algebraic geometry approach is central to the results.
The main theoretical work preceding the present paper is by @cite_14 . Considering the case of a single transmit dimension, @math , they apply Bernstein's Theorem, which requires that each coefficient in a system of polynomial equations be chosen generically. They note that Bernstein's Theorem no longer applies in the case @math , as the equations describing the problem become coupled and coefficients are repeated. Our approach bypasses the difficulties posed by coupled equations; thus, unlike @cite_14 , our results are not restricted to @math .
{ "cite_N": [ "@cite_14" ], "mid": [ "2168095411" ], "abstract": [ "We explore the feasibility of interference alignment in signal vector space-based only on beamforming-for K-user MIMO interference channels. Our main contribution is to relate the feasibility issue to the problem of determining the solvability of a multivariate polynomial system which is considered extensively in algebraic geometry. It is well known, e.g., from Bezout's theorem, that generic polynomial systems are solvable if and only if the number of equations does not exceed the number of variables. Following this intuition, we classify signal space interference alignment problems as either proper or improper based on the number of equations and variables. Rigorous connections between feasible and proper systems are made through Bernshtein's theorem for the case where each transmitter uses only one beamforming vector. The multibeam case introduces dependencies among the coefficients of a polynomial system so that the system is no longer generic in the sense required by both theorems. In this case, we show that the connection between feasible and proper systems can be further strengthened (since the equivalency between feasible and proper systems does not always hold) by including standard information theoretic outer bounds in the feasibility analysis." ] }
1104.0888
1525037223
Determining the feasibility conditions for vector space interference alignment in the K-user MIMO interference channel with constant channel coefficients has attracted much recent attention yet remains unsolved. The main result of this paper is restricted to the symmetric square case where all transmitters and receivers have N antennas, and each user desires d transmit dimensions. We prove that alignment is possible if and only if the number of antennas satisfies N >= d(K+1)/2. We also show a necessary condition for feasibility of alignment with arbitrary system parameters. An algebraic geometry approach is central to the results.
Almost all other work has focused on heuristic algorithms, mainly iterative in nature (see @cite_17 , @cite_12 , and @cite_11 ). Some come with proofs of convergence, but performance guarantees are not available. @cite_11 and @cite_5 study a refined version of the single-transmit-dimension problem: for the case that alignment is possible (as mentioned above, feasibility of alignment is known for the single-transmit case @math ), they attempt to choose a good solution among the many possible solutions. Papailiopoulos and Dimakis @cite_3 relax the problem of maximizing degrees of freedom to a constrained rank minimization and propose an iterative algorithm.
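The flavor of these iterative schemes can be conveyed by the distributed leakage-minimization idea: alternately update each receiver's subspace (the smallest eigenvectors of its interference covariance) and, using channel reciprocity, the transmit precoders. The numpy sketch below is our own minimal version for the classic feasible symmetric case K = 3, N = 2, d = 1; the random channel model, seed, and iteration count are illustrative assumptions, not parameters from any of the cited papers.

```python
import numpy as np

def min_leakage_subspaces(H, V, d):
    """For fixed transmit subspaces V, return for each receiver k the d
    eigenvectors of its interference covariance with smallest eigenvalues,
    i.e. the receive subspace into which the least interference leaks."""
    K = len(V)
    U = []
    for k in range(K):
        Q = sum(H[k][l] @ V[l] @ V[l].conj().T @ H[k][l].conj().T
                for l in range(K) if l != k)
        _, vecs = np.linalg.eigh(Q)          # eigenvalues in ascending order
        U.append(vecs[:, :d])
    return U

def total_leakage(H, V, U):
    """Interference power leaking into the desired receive subspaces."""
    K = len(V)
    return sum(np.linalg.norm(U[k].conj().T @ H[k][l] @ V[l]) ** 2
               for k in range(K) for l in range(K) if l != k)

rng = np.random.default_rng(0)
K, N, d = 3, 2, 1                            # feasible case: N = d*(K+1)/2
cplx = lambda *s: rng.standard_normal(s) + 1j * rng.standard_normal(s)
H = [[cplx(N, N) for _ in range(K)] for _ in range(K)]          # H[k][l]: tx l -> rx k
V = [np.linalg.qr(cplx(N, d))[0] for _ in range(K)]             # random orthonormal precoders

for _ in range(1000):
    U = min_leakage_subspaces(H, V, d)                          # update receive subspaces
    Hrev = [[H[l][k].conj().T for l in range(K)] for k in range(K)]
    V = min_leakage_subspaces(Hrev, U, d)                       # reciprocal network: update precoders

final_leak = total_leakage(H, V, min_leakage_subspaces(H, V, d))
print(final_leak)  # approaches zero when alignment is feasible
```

Each half-step can only decrease the leakage, which is why the iteration is attractive in practice; as the paragraph above notes, however, such schemes come without guarantees on the quality of the alignment solution they converge to.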
{ "cite_N": [ "@cite_17", "@cite_3", "@cite_5", "@cite_12", "@cite_11" ], "mid": [ "2139960897", "2151381597", "2136266535", "2028676993", "2164275514" ], "abstract": [ "Using interference alignment, it has been shown that the number of degrees of freedom in the interference channel scales linearly with the number of users. Unfortunately, closed-form solutions for interference alignment over constant-coefficient channels with more than 3 users are difficult to derive. This paper proposes an algorithm for interference alignment in the MIMO interference channel with an arbitrary number of users, antennas, or spatial streams. The algorithm is an alternating minimization over the precoding matrices at the transmitters and the interference subspaces at the receivers, and is proven to converge. Numerical results show how the algorithm is useful for simulation and can give insight into the limitations of interference alignment.", "We show that the maximization of the sum degrees-of-freedom for the static flat-fading multiple-input multiple-output (MIMO) interference channel is equivalent to a rank constrained rank minimization problem (RCRM), when the signal spaces span all available dimensions. The rank minimization corresponds to maximizing interference alignment (IA) so that interference spans the lowest dimensional subspace possible. The rank constraints account for the useful signal spaces spanning all available spatial dimensions. That way, we reformulate all IA requirements to requirements involving ranks. Then, we present a convex relaxation of the RCRM problem inspired by recent results in compressed sensing and low-rank matrix completion theory that rely on approximating rank with the nuclear norm. We show that the convex envelope of the sum of ranks of the interference matrices is the normalized sum of their corresponding nuclear norms and introduce tractable constraints that are asymptotically equivalent to the rank constraints for the initial problem. 
We also show that our heuristic relaxation can be tuned for the multi-cell interference channel. Furthermore, we experimentally show that in many cases the proposed algorithm attains perfect interference alignment and in some cases outperforms previous approaches for finding precoding and zero-forcing matrices for interference alignment.", "We consider the joint optimization of beamformers and linear receivers in a MIMO interference network. Each transmitter transmits a single beam corresponding to a rank-one precoder. When the number of users K is greater than the number of antennas at each terminal N, the maximum degrees of freedom is achieved via spatial interference alignment. Interference alignment is feasible for up to K = 2N−1 users, in which case there is a finite number of solutions to the alignment conditions. This number of solutions increases rapidly with N, and the solutions depend only on the cross-channel coefficients (i. e., they are independent of the direct channels). To maximize the achievable sum rate at high SNRs we therefore wish to select an aligned solution which is best matched to the direct channels. We evaluate the performance of this scheme for large K and N, assuming that the solution is the best out of a random subset of aligned solutions. We then compare numerically this performance with the performance of previously proposed numerical (e. g., forward-backward) techniques for optimizing beams, and a new technique which tracks the local optimum as the SNR is incrementally increased, similar to a homotopy method for improving convergence properties. We observe that the incremental technique typically achieves better performance than the previously proposed methods.", "Consider a MIMO interference channel whereby each transmitter and receiver are equipped with multiple antennas. The basic problem is to design optimal linear transceivers (or beamformers) that can maximize system throughput. 
The recent work [13] suggests that optimal beamformers should maximize the total degrees of freedom and achieve interference alignment in high SNR. In this paper we first consider the interference alignment problem in spatial domain and prove that the problem of maximizing the total degrees of freedom for a given MIMO interference channel is NP-hard. Furthermore, we show that even checking the achievability of a given tuple of degrees of freedom for all receivers is NP-hard when each receiver is equipped with at least three antennas. Moreover, in case where each transmitter and receiver use at most two antennas, the same problem is polynomial time solvable. Finally, we propose a distributed algorithm for transmit covariance matrix design, while assuming each receiver uses a linear MMSE beamformer. The simulation results show that the proposed algorithm outperforms the existing interference alignment algorithms in terms of system throughput.", "Alternating minimization algorithms are typically used to find interference alignment (IA) solutions for multiple- input multiple-output (MIMO) interference channels with more than K =3 users. For these scenarios many IA solutions exit, and the initial point determines which one is obtained upon convergence. In this paper, we propose a new iterative algorithm that aims at finding the IA solution that maximizes the average sum-rate. At each step of the alternating minimization algorithm, either the precoders or the decoders are moved along the direction given by the gradient of the sum-rate. Since IA solutions are defined by a set of subspaces, the gradient optimization is performed on the Grassmann manifold. The step size of the gradient ascent algorithm is annealed to zero over the iterations in such a way that during the last iterations only the interference leakage is being minimized and a perfect alignment solution is finally reached. 
Simulation examples are provided showing that the proposed algorithm obtains IA solutions with significant higher throughputs than the conventional IA algorithms." ] }
1104.0888
1525037223
Determining the feasibility conditions for vector space interference alignment in the K-user MIMO interference channel with constant channel coefficients has attracted much recent attention yet remains unsolved. The main result of this paper is restricted to the symmetric square case where all transmitters and receivers have N antennas, and each user desires d transmit dimensions. We prove that alignment is possible if and only if the number of antennas satisfies N >= d(K+1)/2. We also show a necessary condition for feasibility of alignment with arbitrary system parameters. An algebraic geometry approach is central to the results.
In a different direction of inquiry, @cite_12 show that checking the feasibility of alignment for general system parameters is NP-hard. Note that their result does not contradict ours, since our simple closed-form expression applies only to the fully symmetric case.
{ "cite_N": [ "@cite_12" ], "mid": [ "2028676993" ], "abstract": [ "Consider a MIMO interference channel whereby each transmitter and receiver are equipped with multiple antennas. The basic problem is to design optimal linear transceivers (or beamformers) that can maximize system throughput. The recent work [13] suggests that optimal beamformers should maximize the total degrees of freedom and achieve interference alignment in high SNR. In this paper we first consider the interference alignment problem in spatial domain and prove that the problem of maximizing the total degrees of freedom for a given MIMO interference channel is NP-hard. Furthermore, we show that even checking the achievability of a given tuple of degrees of freedom for all receivers is NP-hard when each receiver is equipped with at least three antennas. Moreover, in case where each transmitter and receiver use at most two antennas, the same problem is polynomial time solvable. Finally, we propose a distributed algorithm for transmit covariance matrix design, while assuming each receiver uses a linear MMSE beamformer. The simulation results show that the proposed algorithm outperforms the existing interference alignment algorithms in terms of system throughput." ] }
1104.0888
1525037223
Determining the feasibility conditions for vector space interference alignment in the K-user MIMO interference channel with constant channel coefficients has attracted much recent attention yet remains unsolved. The main result of this paper is restricted to the symmetric square case where all transmitters and receivers have N antennas, and each user desires d transmit dimensions. We prove that alignment is possible if and only if the number of antennas satisfies N >= d(K+1)/2. We also show a necessary condition for feasibility of alignment with arbitrary system parameters. An algebraic geometry approach is central to the results.
We emphasize that in this paper we restrict attention to vector space interference alignment, where the effect of finite channel diversity can be observed. Interfering signals can also be aligned on the signal scale using lattice codes (first proposed in @cite_8 ; see also @cite_6 , @cite_15 , @cite_16 ); however, the understanding of this type of alignment is currently at the stage corresponding to infinite parallel channels in the vector space setting. In other words, essentially "perfect" alignment is possible due to the infinite channel precision available at infinite signal-to-noise ratios.
{ "cite_N": [ "@cite_15", "@cite_16", "@cite_6", "@cite_8" ], "mid": [ "2100842176", "2130172876", "2168957483", "2151027523" ], "abstract": [ "The degrees-of-freedom of a K-user Gaussian interference channel (GIC) has been defined to be the multiple of (1/2)log_2 P at which the maximum sum of achievable rates grows with increasing power P. In this paper, we establish that the degrees-of-freedom of three or more user, real, scalar GICs, viewed as a function of the channel coefficients, is discontinuous at points where all of the coefficients are nonzero rational numbers. More specifically, for all K > 2, we find a class of K-user GICs that is dense in the GIC parameter space for which K/2 degrees-of-freedom are exactly achievable, and we show that the degrees-of-freedom for any GIC with nonzero rational coefficients is strictly smaller than K/2. These results are proved using new connections with number theory and additive combinatorics.", "In this paper, we develop the machinery of real interference alignment. This machinery is extremely powerful in achieving the sum degrees of freedom (DoF) of single antenna systems. The scheme of real interference alignment is based on designing single-layer and multilayer constellations used for modulating information messages at the transmitters. We show that constellations can be aligned in a similar fashion as that of vectors in multiple antenna systems and space can be broken up into fractional dimensions. The performance analysis of the signaling scheme makes use of a recent result in the field of Diophantine approximation, which states that the convergence part of the Khintchine-Groshev theorem holds for points on nondegenerate manifolds. Using real interference alignment, we obtain the sum DoF of two model channels, namely the Gaussian interference channel (IC) and the X channel. It is proved that the sum DoF of the K-user IC is K/2 for almost all channel parameters.
We also prove that the sum DoF of the X-channel with K transmitters and M receivers is KM/(K + M - 1) for almost all channel parameters.", "An interference alignment example is constructed for the deterministic channel model of the K-user interference channel. The deterministic channel example is then translated into the Gaussian setting, creating the first known example of a fully connected Gaussian K-user interference network with single antenna nodes, real, nonzero and constant channel coefficients, and no propagation delays where the degrees of freedom outerbound is achieved. An analogy is drawn between the propagation delay based interference alignment examples and the deterministic channel model which also allows similar constructions for the two-user X channel as well.", "Recently, Etkin, Tse, and Wang found the capacity region of the two-user Gaussian interference channel to within 1 bit/s/Hz. A natural goal is to apply this approach to the Gaussian interference channel with an arbitrary number of users. We make progress towards this goal by finding the capacity region of the many-to-one and one-to-many Gaussian interference channels to within a constant number of bits. The result makes use of a deterministic model to provide insight into the Gaussian channel. The deterministic model makes explicit the dimension of signal level. A central theme emerges: the use of lattice codes for alignment of interfering signals on the signal level." ] }
1103.5609
2952122241
The notion of recoverable value was advocated in work of Feige, Immorlica, Mirrokni and Nazerzadeh [Approx 2009] as a measure of quality for approximation algorithms. There this concept was applied to facility location problems. In the current work we apply a similar framework to the maximum independent set problem (MIS). We say that an approximation algorithm has recoverable value @math , if for every graph it recovers an independent set of size at least @math , where @math is the degree of vertex @math , and @math ranges over all independent sets in @math . Hence, in a sense, from every vertex @math in the maximum independent set the algorithm recovers a value of at least @math towards the solution. This quality measure is most effective in graphs in which the maximum independent set is composed of low degree vertices. It easily follows from known results that some simple algorithms for MIS ensure @math . We design a new randomized algorithm for MIS that ensures an expected recoverable value of at least @math . In addition, we show that approximating MIS in graphs with a given @math -coloring within a ratio larger than @math is unique games hard. This rules out a natural approach for obtaining @math .
Greedy. For MIS, iteratively picking a minimum-degree vertex, adding it to an independent set @math , and deleting the vertex and its neighbors from the graph is guaranteed to find an independent set of size at least @math @cite_3 @cite_8 . Halldorsson and Radhakrishnan @cite_4 showed that this greedy algorithm produces an independent set of size at least @math (where @math denotes the fraction of vertices in the maximum independent set). For MWIS, the weighted greedy algorithm, which iteratively picks a vertex @math with minimum @math , is guaranteed to find an independent set of weight at least @math @cite_5 .
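The minimum-degree greedy procedure just described is a few lines of code. The sketch below (plain Python; the 5-cycle test graph is an illustrative choice of ours) also checks the Caro-Wei-type guarantee that the output has size at least the sum of 1/(d(v)+1) over all vertices v.

```python
def greedy_mis(adj):
    """Minimum-degree greedy: repeatedly pick a vertex of minimum degree
    in the remaining graph, add it to the independent set, and delete it
    together with its neighbors."""
    adj = {v: set(ns) for v, ns in adj.items()}   # work on a copy
    indep = []
    while adj:
        v = min(adj, key=lambda u: len(adj[u]))   # minimum remaining degree
        indep.append(v)
        removed = adj[v] | {v}
        for u in removed:
            adj.pop(u, None)
        for u in adj:
            adj[u] -= removed
    return indep

# 5-cycle: every vertex has degree 2, so the degree-sum bound is 5/3.
c5 = {i: {(i - 1) % 5, (i + 1) % 5} for i in range(5)}
mis = greedy_mis(c5)
bound = sum(1.0 / (len(ns) + 1) for ns in c5.values())
assert all(u not in c5[v] for u in mis for v in mis)  # output is independent
assert len(mis) >= bound                              # Caro-Wei guarantee holds
print(len(mis))  # 2 on the 5-cycle
```

The weighted variant for MWIS is the same loop with the selection rule `min` taken over the degree-to-weight quantity described above instead of the plain degree.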
{ "cite_N": [ "@cite_5", "@cite_4", "@cite_3", "@cite_8" ], "mid": [ "2008690976", "2003665833", "1976890066", "135031060" ], "abstract": [ "In this paper, we consider three simple and natural greedy algorithms for the maximum weighted independent set problem. We show that two of them output an independent set of weight at least Σv∈V(G) W(v)/[d(v) + 1] and the third algorithm outputs an independent set of weight at least Σv∈V(G) W(v)^2/[Σu∈NG+(v) W(u)]. These results are generalizations of a theorem of Caro and Wei.", "The minimum-degree greedy algorithm, or Greedy for short, is a simple and well-studied method for finding independent sets in graphs. We show that it achieves a performance ratio of (Δ+2)/3 for approximating independent sets in graphs with degree bounded by Δ. The analysis yields a precise characterization of the size of the independent sets found by the algorithm as a function of the independence number, as well as a generalization of Turan's bound. We also analyze the algorithm when run in combination with a known preprocessing technique, and obtain an improved @math performance ratio on graphs with average degree @math , improving on the previous best @math of Hochbaum. Finally, we present an efficient parallel and distributed algorithm attaining the performance guarantees of Greedy.", "Wei discovered that the independence number of a graph G is at least Σv(1 + d(v))^-1. It is proved here that if G is a connected triangle-free graph on n ≥ 3 vertices and if G is neither an odd cycle nor an odd path, then the bound above can be increased by nΔ/(Δ + 1), where Δ is the maximum degree. This new bound is sharp for even cycles and for three other graphs. These results relate nicely to some algorithms for finding large independent sets. They also have a natural matrix theory interpretation. A survey of other known lower bounds on the independence number is presented.", "" ] }
1103.5609
2952122241
The notion of recoverable value was advocated in work of Feige, Immorlica, Mirrokni and Nazerzadeh [Approx 2009] as a measure of quality for approximation algorithms. There this concept was applied to facility location problems. In the current work we apply a similar framework to the maximum independent set problem (MIS). We say that an approximation algorithm has recoverable value @math , if for every graph it recovers an independent set of size at least @math , where @math is the degree of vertex @math , and @math ranges over all independent sets in @math . Hence, in a sense, from every vertex @math in the maximum independent set the algorithm recovers a value of at least @math towards the solution. This quality measure is most effective in graphs in which the maximum independent set is composed of low degree vertices. It easily follows from known results that some simple algorithms for MIS ensure @math . We design a new randomized algorithm for MIS that ensures an expected recoverable value of at least @math . In addition, we show that approximating MIS in graphs with a given @math -coloring within a ratio larger than @math is unique games hard. This rules out a natural approach for obtaining @math .
Consider the LP relaxation of this program where each @math . A well known result due to Nemhauser and Trotter @cite_14 asserts that there is an optimal solution for the relaxation such that for every @math . Moreover, such an optimal solution can be found in polynomial time.
{ "cite_N": [ "@cite_14" ], "mid": [ "2013415302" ], "abstract": [ "We consider a binary integer programming formulation (VP) for the weighted vertex packing problem in a simple graph. A sufficient “local” optimality condition for (VP) is given and this result is used to derive relations between (VP) and the linear program (VLP) obtained by deleting the integrality restrictions in (VP). Our most striking result is that those variables which assume binary values in an optimum (VLP) solution retain the same values in an optimum (VP) solution. This result is of interest because variables are (0, 1 2, 1). valued in basic feasible solutions to (VLP) and (VLP) can be solved by a “good” algorithm. This relationship and other optimality conditions are incorporated into an implicit enumeration algorithm for solving (VP). Some computational experience is reported." ] }
1103.5609
2952122241
The notion of recoverable value was advocated in work of Feige, Immorlica, Mirrokni and Nazerzadeh [Approx 2009] as a measure of quality for approximation algorithms. There this concept was applied to facility location problems. In the current work we apply a similar framework to the maximum independent set problem (MIS). We say that an approximation algorithm has recoverable value @math , if for every graph it recovers an independent set of size at least @math , where @math is the degree of vertex @math , and @math ranges over all independent sets in @math . Hence, in a sense, from every vertex @math in the maximum independent set the algorithm recovers a value of at least @math towards the solution. This quality measure is most effective in graphs in which the maximum independent set is composed of low degree vertices. It easily follows from known results that some simple algorithms for MIS ensure @math . We design a new randomized algorithm for MIS that ensures an expected recoverable value of at least @math . In addition, we show that approximating MIS in graphs with a given @math -coloring within a ratio larger than @math is unique games hard. This rules out a natural approach for obtaining @math .
LP+greedy. Consider the following algorithm. Find an optimal half-integral solution to the LP, discard all the vertices assigned 0, keep all the vertices assigned 1, and run the greedy algorithm on the graph induced by all vertices that are assigned @math . This algorithm was analyzed for connected graphs. Hochbaum @cite_2 proved an approximation ratio of @math , and Halldorsson and Radhakrishnan @cite_4 (based on their improved analysis of the greedy algorithm) proved an approximation ratio of @math .
{ "cite_N": [ "@cite_4", "@cite_2" ], "mid": [ "2003665833", "2077783414" ], "abstract": [ "Theminimum-degree greedy algorithm, or Greedy for short, is a simple and well-studied method for finding independent sets in graphs. We show that it achieves a performance ratio of (Δ+2) 3 for approximating independent sets in graphs with degree bounded by Δ. The analysis yields a precise characterization of the size of the independent sets found by the algorithm as a function of the independence number, as well as a generalization of Turan's bound. We also analyze the algorithm when run in combination with a known preprocessing technique, and obtain an improved @math performance ratio on graphs with average degree @math , improving on the previous best @math of Hochbaum. Finally, we present an efficient parallel and distributed algorithm attaining the performance guarantees of Greedy.", "Abstract In this paper we describe a collection of efficient algorithms that deliver approximate solution to the weighted stable set, vertex cover and set packing problems. All algorithms guarantee bounds on the ratio of the heuristic solution to the optimal solution." ] }
1103.5609
2952122241
The notion of recoverable value was advocated in work of Feige, Immorlica, Mirrokni and Nazerzadeh [Approx 2009] as a measure of quality for approximation algorithms. There this concept was applied to facility location problems. In the current work we apply a similar framework to the maximum independent set problem (MIS). We say that an approximation algorithm has recoverable value @math , if for every graph it recovers an independent set of size at least @math , where @math is the degree of vertex @math , and @math ranges over all independent sets in @math . Hence, in a sense, from every vertex @math in the maximum independent set the algorithm recovers a value of at least @math towards the solution. This quality measure is most effective in graphs in which the maximum independent set is composed of low degree vertices. It easily follows from known results that some simple algorithms for MIS ensure @math . We design a new randomized algorithm for MIS that ensures an expected recoverable value of at least @math . In addition, we show that approximating MIS in graphs with a given @math -coloring within a ratio larger than @math is unique games hard. This rules out a natural approach for obtaining @math .
In terms of hardness results, Austrin, Khot and Safra @cite_0 proved that approximating independent set in graphs of maximum degree @math within a ratio larger than @math is unique games hard. Recall that we present hardness results for finding independent sets in graphs where a @math -coloring is given. They essentially match the @math bounds achieved by known approximation algorithms @cite_2 . We are not aware of previous published hardness results for this problem, but there are some results for related problems on hypergraphs @cite_13 , and hardness results for MIS in graphs with bounded chromatic number but when no coloring is given @cite_16 .
{ "cite_N": [ "@cite_0", "@cite_16", "@cite_13", "@cite_2" ], "mid": [ "2166369652", "", "1655051759", "2077783414" ], "abstract": [ "We study the inapproximability of Vertex Cover and Independent Set on degree @math graphs. We prove that: Vertex Cover is Unique Games-hard to approximate to within a factor @math . This exactly matches the algorithmic result of Halperin halperin02improved up to the @math term. Independent Set is Unique Games-hard to approximate to within a factor @math . This improves the @math Unique Games hardness result of Samorodnitsky and Trevisan samorodnitsky06gowers . Additionally, our result does not rely on the construction of a query efficient PCP as in samorodnitsky06gowers .", "", "Computing a minimum vertex cover in graphs and hypergraphs is a well-studied optimizaton problem. While intractable in general, it is well known that on bipartite graphs, vertex cover is polynomial time solvable. In this work, we study the natural extension of bipartite vertex cover to hypergraphs, namely finding a small vertex cover in k- uniform k-partite hypergraphs, when the k-partition is given as input. For this problem Lovasz [16] gave a k 2 factor LP rounding based approximation, and a matching (k 2 - o(1)) integrality gap instance was constructed by [1]. We prove the following results, which are the first strong hardness results for this problem (heree > 0 is an arbitrary constant): - NP-hardness of approximating within a factor of (k 4 -e), and - Unique Games-hardness of approximating within a factor of (k 2 -e), showing optimality of Lovasz's algorithm under the Unique Games conjecture. The NP-hardness result is based on a reduction from minimum vertex cover in r-uniform hypergraphs for which NP-hardness of approximating within r-1-e was shown by [5]. The Unique Games-hardness result is obtained by applying the recent results of [15], with a slight modification, to the LP integrality gap due to [1]. 
The modification is to ensure that the reduction preserves the desired structural properties of the hypergraph.", "Abstract In this paper we describe a collection of efficient algorithms that deliver approximate solution to the weighted stable set, vertex cover and set packing problems. All algorithms guarantee bounds on the ratio of the heuristic solution to the optimal solution." ] }
1103.5736
1618842139
Finite state automata (FSA) are ubiquitous in computer science. Two of the most important algorithms for FSA processing are the conversion of a non-deterministic finite automaton (NFA) to a deterministic finite automaton (DFA), and then the production of the unique minimal DFA for the original NFA. We exhibit a parallel disk-based algorithm that uses a cluster of 29 commodity computers to produce an intermediate DFA with almost two billion states and then continues by producing the corresponding unique minimal DFA with less than 800,000 states. The largest previous such computation in the literature was carried out on a 512-processor CM-5 supercomputer in 1996. That computation produced an intermediate DFA with 525,000 states and an unreported number of states for the corresponding minimal DFA. The work is used to provide strong experimental evidence satisfying a conjecture on a series of token passing networks. The conjecture concerns stack sortable permutations for a finite stack and a 3-buffer. The origins of this problem lie in the work on restricted permutations begun by Knuth and Tarjan in the late 1960s. The parallel disk-based computation is also compared with both a single-threaded and multi-threaded RAM-based implementation using a 16-core 128 GB large shared memory computer.
Finite state machines are also an important tool in natural language processing, and have been used for a wide variety of problems in computational linguistics. In a work presenting new applications of finite state automata to natural language processing @cite_32 , Mohri cites a number of examples, including: lexical analysis @cite_29 ; morphology and phonology @cite_31 ; syntax @cite_24 @cite_28 ; text-to-speech synthesis @cite_6 ; and speech recognition @cite_18 @cite_25 . Speech recognition, in particular, can benefit from the use of very large automata. In @cite_16 , Mohri predicted:
{ "cite_N": [ "@cite_18", "@cite_28", "@cite_29", "@cite_32", "@cite_6", "@cite_24", "@cite_31", "@cite_16", "@cite_25" ], "mid": [ "", "", "2051846017", "2007624857", "", "1794686686", "1971927919", "2125529971", "2612304886" ], "abstract": [ "", "", "INTEX is a text processor; it is usually used to parse corpora of several megabytes. It includes several built-in large coverage dictionaries and grammars represented by graphs; the user may add his her own dictionaries and grammars. These tools are applied to texts in order to locate lexical and syntactic patterns, remove ambiguities, and tag words. INTEX builds concordances and indexes of all types of patterns; it is used by linguists to analyse corpora, but can also be viewed as an information retrieval system.", "We describe new applications of the theory of automata to natural language processing: the representation of very large scale dictionaries and the indexation of natural language texts. They are based on new algorithms that we introduce and describe in detail. In particular, we give pseudocodes for the determinisation of string to string transducers, the deterministic union of p-subsequential string to string transducers, and the indexation by automata. We report on several experiments illustrating the applications.", "", "Local grammars can be represented in a very convenient way by automata. This paper describes and illustrates an efficient algorithm for the application of local grammars put in this form to lemmatized texts.", "A source of potential systematic errors in information retrieval is identified and discussed. These errors occur when base form reduction is applied with a (necessarily) finite dictionary. Formal methods for avoiding this error source are presented, along with some practical complexities met in its implementation.", "Finite-machines have been used in various domains of natural language processing. 
We consider here the use of a type of transducer that supports very efficient programs: sequential transducers. We recall classical theorems and give new ones characterizing sequential string-to-string transducers. Transducers that outpur weights also play an important role in language and speech processing. We give a specific study of string-to-weight transducers, including algorithms for determinizing and minizizing these transducers very efficiently, and characterizations of the transducers admitting determinization and the corresponding algorithms. Some applications of these algorithms in speech recognition are described and illustrated.", "" ] }
1103.5736
1618842139
Finite state automata (FSA) are ubiquitous in computer science. Two of the most important algorithms for FSA processing are the conversion of a non-deterministic finite automaton (NFA) to a deterministic finite automaton (DFA), and then the production of the unique minimal DFA for the original NFA. We exhibit a parallel disk-based algorithm that uses a cluster of 29 commodity computers to produce an intermediate DFA with almost two billion states and then continues by producing the corresponding unique minimal DFA with less than 800,000 states. The largest previous such computation in the literature was carried out on a 512-processor CM-5 supercomputer in 1996. That computation produced an intermediate DFA with 525,000 states and an unreported number of states for the corresponding minimal DFA. The work is used to provide strong experimental evidence satisfying a conjecture on a series of token passing networks. The conjecture concerns stack sortable permutations for a finite stack and a 3-buffer. The origins of this problem lie in the work on restricted permutations begun by Knuth and Tarjan in the late 1960s. The parallel disk-based computation is also compared with both a single-threaded and multi-threaded RAM-based implementation using a 16-core 128 GB large shared memory computer.
Parallel DFA minimization has been considered since the 1990s. All existing parallel algorithms are for shared-memory machines, using either the CRCW PRAM model @cite_11 , the CREW PRAM model @cite_1 , or the EREW PRAM model @cite_26 . All of these algorithms are applicable to tightly coupled parallel machines with shared RAM, and they make heavy use of random access to shared memory. In addition, @cite_26 minimized a 525,000-state DFA on the CM-5 supercomputer.
{ "cite_N": [ "@cite_26", "@cite_1", "@cite_11" ], "mid": [ "2109227159", "2050046996", "2141216597" ], "abstract": [ "We present a parallel algorithm for the minimization of deterministic finite state automata (DFAs) and discuss its implementation on a connection machine CM-5 using data parallel and message passing models. We show that its time complexity on a p processor EREW PRAM (p spl les n) for inputs of size n is O([n log sup 2 n p]+log n log p) uniformly on almost all instances. The work done by our algorithm is thus within a factor of O(log n) of the best known sequential algorithm. The space used by our algorithm is linear in the input size. The actual resource requirements of our implementations are consistent with these estimates. Although parallel algorithms have been proposed for this problem in the past, they are not practical. We discuss the implementation details and the experimental results.", "", "In this paper, we have considered the state minimization problem for Deterministic Finite Automata (DFA). An efficient parallel algorithm for solving the problem on an arbitrary CRCW PRAM has been proposed. For n number of states and k number of inputs in ? of the DFA to be minimized, the algorithm runs in O(kn log n) time and uses O(n log n) processors." ] }
1103.5736
1618842139
Finite state automata (FSA) are ubiquitous in computer science. Two of the most important algorithms for FSA processing are the conversion of a non-deterministic finite automaton (NFA) to a deterministic finite automaton (DFA), and then the production of the unique minimal DFA for the original NFA. We exhibit a parallel disk-based algorithm that uses a cluster of 29 commodity computers to produce an intermediate DFA with almost two billion states and then continues by producing the corresponding unique minimal DFA with less than 800,000 states. The largest previous such computation in the literature was carried out on a 512-processor CM-5 supercomputer in 1996. That computation produced an intermediate DFA with 525,000 states and an unreported number of states for the corresponding minimal DFA. The work is used to provide strong experimental evidence satisfying a conjecture on a series of token passing networks. The conjecture concerns stack sortable permutations for a finite stack and a 3-buffer. The origins of this problem lie in the work on restricted permutations begun by Knuth and Tarjan in the late 1960s. The parallel disk-based computation is also compared with both a single-threaded and multi-threaded RAM-based implementation using a 16-core 128 GB large shared memory computer.
Obtaining a minimal canonical DFA equivalent to a given NFA is important for the analysis of the classes of permutations generated by token passing in graphs. Such a graph is called a token passing network (TPN) @cite_9 @cite_3 . This is related to the subject of restricted permutations @cite_15 , with origins in the 1969 work of Knuth [Section 2.2.1] and the 1972 work of Tarjan @cite_23 . TPNs are used to model or approximate a range of data structures, including combinations of stacks, and provide tools for analyzing the classes of permutations that can be sorted or generated using them. Stack sorting problems have been the subject of extensive research @cite_35 . Sorting with two ordered stacks in series is detailed in @cite_36 . Permutation classes defined by TPNs are described in @cite_33 . Very recent work focused on permutations generated by stacks and deques @cite_21 . A collection of results on permutation problems expressed as token passing networks is given in @cite_0 .
{ "cite_N": [ "@cite_35", "@cite_33", "@cite_36", "@cite_9", "@cite_21", "@cite_3", "@cite_0", "@cite_23", "@cite_15" ], "mid": [ "2102928439", "2099673747", "2068458438", "", "2077632363", "2103793661", "", "2017788080", "" ], "abstract": [ "We review the various ways that stacks, their variations and their combinations, have been used as sorting devices. In particular, we show that they have been a key motivator for the study of permutation patterns. We also show that they have connections to other areas in combinatorics such as Young tableau, planar graph theory, and simplicial complexes.", "The study of pattern classes is the study of the involvement order on finite permutations. This order can be traced back to the work of Knuth. In recent years the area has attracted the attention of many combinatoralists and there have been many structural and enumerative developments. We consider permutations classes defined in three different ways and demonstrate that asking the same fixed questions in each case motivates a different view of involvement. Token passing networks encourage us to consider permutations as sequences of integers; grid classes encourage us to consider them as point sets; picture classes, which are developed for the first time in this thesis, encourage a purely geometrical approach. As we journey through each area we present several new results. We begin by studying the basic definitions of a permutation. This is followed by a discussion of the questions one would wish to ask of permutation classes. We concentrate on four particular areas: partial well order, finite basis, atomicity and enumeration. Our third chapter asks these questions of token passing networks; we also develop the concept of completeness and show that it is decidable whether or not a particular network is complete. 
Next we move onto grid classes, our analysis using generic sets yields an algorithm for determining when a grid class is atomic; we also present a new and elegant proof which demonstrates that certain grid classes are partially well ordered. The final chapter comprises the development and analysis of picture classes. We completely classify and enumerate those permutations which can be drawn from a circle, those which can be drawn from an X and those which", "The permutations that can be sorted by two stacks in series are considered, subject to the condition that each stack remains ordered. A forbidden characterisation of such permutations is obtained and the number of permutations of each length is determined by a generating function.", "", "Lower and upper bounds are given for the the number of permutations of length n generated by two stacks in series, two stacks in parallel, and a general deque.", "Abstract A transportation graph is a directed graph with a designated input node and a designated output node. Initially, the input node contains an ordered set of tokens 1,2,3, … The tokens are removed from the input node in this order and transferred through the graph to the output node in a series of moves; each move transfers a token from a node to an adjacent node. Two or more tokens cannot reside on an internal node simultaneously. When the tokens arrive at the output node they will appear in a permutation of their original order. The main result is a description of the possible arrival permutations in terms of regular sets. This description allows the number of arrival permutations of each length to be computed. The theory is then applied to packet-switching networks and has implications for the resequencing problem. It is also applied to some complex data structures and extends previously known results to the case that the data structures are of bounded capacity. 
A by-product of this investigation is a new proof that permutations which avoid the pattern 321 are in one to one correspondence with those that avoid 312.", "", "", "" ] }
1103.4503
2952457341
Discrepancy measures how uniformly distributed a point set is with respect to a given set of ranges. There are two notions of discrepancy, namely continuous discrepancy and combinatorial discrepancy. Depending on the ranges, several possible variants arise, for example star discrepancy, box discrepancy, and discrepancy of half-spaces. In this paper, we investigate the hardness of these problems with respect to the dimension d of the underlying space. All these problems are solvable in time n^O(d) , but such a time dependency quickly becomes intractable for high-dimensional data. Thus it is interesting to ask whether the dependency on d can be moderated. We answer this question negatively by proving that the canonical decision problems are W[1]-hard with respect to the dimension. This is done via a parameterized reduction from the Clique problem. As the parameter stays linear in the input parameter, the results moreover imply that these problems require n^ (d) time, unless 3-Sat can be solved in 2^o(n) time. Further, we derive that testing whether a given set is an -net with respect to half-spaces takes n^ (d) time under the same assumption. As intermediate results, we discover the W[1]-hardness of other well known problems, such as determining the largest empty star inside the unit cube. For this, we show that it is even hard to approximate within a factor of 2^n .
When the dimension is part of the input, the problem was shown to be @math -hard by @cite_21 ; in the same paper an @math -time algorithm was given. @cite_8 gave an algorithm that runs in @math time, where @math is the number of feasible boxes that are not properly contained in any feasible box, and showed that @math can be @math in the worst case. @cite_17 gave an @math -approximation algorithm that runs in @math time.
{ "cite_N": [ "@cite_21", "@cite_17", "@cite_8" ], "mid": [ "1541384941", "2059541050", "2151289047" ], "abstract": [ "Given two finite sets of points X+ and X− in Rn, the maximum box problem consists of finding an interval (“box”) B e lx : l ≤ x ≤ ur such that B ∩ X− e ∅, and the cardinality of B ∩ X+ is maximized. A simple generalization can be obtained by instead maximizing a weighted sum of the elements of B ∩ X+. While polynomial for any fixed n, the maximum box problem is NP -hard in general. We construct an efficient branch-and-bound algorithm for this problem and apply it to a standard problem in data analysis. We test this method on nine data sets, seven of which are drawn from the UCI standard machine learning repository.", "We study the question of finding a deepest point in an arrangement of regions and provide a fast algorithm for this problem using random sampling, showing it sufficient to solve this problem when the deepest point is shallow. This implies, among other results, a fast algorithm for approximately solving linear programming problems with violations. We also use this technique to approximate the disk covering the largest number of red points, while avoiding all the blue points, given two such sets in the plane. Using similar techniques implies that approximate range counting queries have roughly the same time and space complexity as emptiness range queries.", "Given a set of blue points and a set of red points in ddimensional space, we show how to find an axis-aligned hyperrectangle that contains no red points and as many blue points as possible. Our algorithm enumerates the set of relevant hyperrectangles (inclusion maximal axisaligned hyperrectangles that do not contain a red point) and counts the number of blue points in each one. The runtime of our algorithm depends on the total number of relevant hyperrectangles. We prove asymptotically tight bounds on this quantity in the worst case. 
The techniques developed directly apply to the maximum empty rectangle problem in high dimensions." ] }
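The maximum box problem described above admits a simple brute-force baseline in fixed dimension: an optimal closed box can always shrink until each side touches a blue point, so candidate bounds can be drawn from blue coordinates only. A 2-D sketch (names are ours; the search is exponential in d, so this is only for tiny inputs):

```python
from itertools import combinations_with_replacement

def max_box_2d(blue, red):
    """Brute-force maximum box in 2-D: the closed axis-aligned box
    avoiding every red point and containing the most blue points.
    Candidate bounds per axis come from the blue coordinates."""
    xs = sorted({p[0] for p in blue})
    ys = sorted({p[1] for p in blue})
    best = 0
    for x0, x1 in combinations_with_replacement(xs, 2):
        for y0, y1 in combinations_with_replacement(ys, 2):
            inside = lambda p: x0 <= p[0] <= x1 and y0 <= p[1] <= y1
            if not any(inside(r) for r in red):
                best = max(best, sum(1 for b in blue if inside(b)))
    return best

# Unit square of blue points with one red point in the centre:
# any box with 3+ blue points must also cover (0.5, 0.5).
blue = [(0, 0), (1, 0), (0, 1), (1, 1)]
print(max_box_2d(blue, [(0.5, 0.5)]))   # -> 2
```

The cited exact algorithm instead enumerates only the inclusion-maximal red-free boxes, which is what makes larger instances tractable.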
1103.4503
2952457341
Discrepancy measures how uniformly distributed a point set is with respect to a given set of ranges. There are two notions of discrepancy, namely continuous discrepancy and combinatorial discrepancy. Depending on the ranges, several possible variants arise, for example star discrepancy, box discrepancy, and discrepancy of half-spaces. In this paper, we investigate the hardness of these problems with respect to the dimension d of the underlying space. All these problems are solvable in time n^O(d) , but such a time dependency quickly becomes intractable for high-dimensional data. Thus it is interesting to ask whether the dependency on d can be moderated. We answer this question negatively by proving that the canonical decision problems are W[1]-hard with respect to the dimension. This is done via a parameterized reduction from the Clique problem. As the parameter stays linear in the input parameter, the results moreover imply that these problems require n^ (d) time, unless 3-Sat can be solved in 2^o(n) time. Further, we derive that testing whether a given set is an -net with respect to half-spaces takes n^ (d) time under the same assumption. As intermediate results, we discover the W[1]-hardness of other well known problems, such as determining the largest empty star inside the unit cube. For this, we show that it is even hard to approximate within a factor of 2^n .
The problem has been shown to be @math -hard by @cite_4 . An exact algorithm that runs in @math time was given by @cite_25 . @cite_14 gave an approximation algorithm that achieves additive error and runs in fpt-time with respect to the error and the dimension. However, as @cite_15 noted, when the error tolerance is set to the same order as the discrepancy of an optimal point set, so that a constant-factor approximation is achieved, the running time of any algorithm following Thiemard's approach becomes @math . As for the , no hardness results were known so far.
{ "cite_N": [ "@cite_15", "@cite_14", "@cite_4", "@cite_25" ], "mid": [ "2031874716", "", "2162820178", "2077220042" ], "abstract": [ "In the first part of this paper we derive lower bounds and constructive upper bounds for the bracketing numbers of anchored and unanchored axis-parallel boxes in the d-dimensional unit cube. In the second part we apply these results to geometric discrepancy. We derive upper bounds for the inverse of the star and the extreme discrepancy with explicitly given small constants and an optimal dependence on the dimension d, and provide corresponding bounds for the star and the extreme discrepancy itself. These bounds improve known results from [B. Doerr, M. Gnewuch, A. Srivastav, Bounds and constructions for the star-discrepancy via @d-covers, J. Complexity 21 (2005) 691-709], [M. Gnewuch, Bounds for the average L^p-extreme and the L^ -extreme discrepancy, Electron. J. Combin. 12 (2005) Research Paper 54] and [H. N. Mhaskar, On the tractability of multivariate integration and approximation by neural networks, J. Complexity 20 (2004) 561-590]. We also discuss an algorithm from [E. Thiemard, An algorithm to compute bounds for the star discrepancy, J. Complexity 17 (2001) 850-880] to approximate the star-discrepancy of a given n-point set. Our lower bound on the bracketing number of anchored boxes, e.g., leads directly to a lower bound of the running time of Thiemard's algorithm. Furthermore, we show how one can use our results to modify the algorithm to approximate the extreme discrepancy of a given set.", "", "The well-known star discrepancy is a common measure for the uniformity of point distributions. It is used, e.g., in multivariate integration, pseudo random number generation, experimental design, statistics, or computer graphics. We study here the complexity of calculating the star discrepancy of point sets in the d-dimensional unit cube and show that this is an NP-hard problem. 
To establish this complexity result, we first prove NP-hardness of the following related problems in computational geometry: Given n points in the d-dimensional unit cube, find a subinterval of minimum or maximum volume that contains k of the n points. Our results for the complexity of the subinterval problems settle a conjecture of E. Thiemard [E. Thiemard, Optimal volume subintervals with k points and star discrepancy via integer programming, Math. Meth. Oper. Res. 54 (2001) 21-45].", "Patterns used for supersampling in graphics have been analyzed from statistical and signal-processing viewpoints. We present an analysis based on a type of isotropic discrepancy—how good patterns are at estimating the area in a region of defined type. We present algorithms for computing discrepancy relative to regions that are defined by rectangles, halfplanes, and higher-dimensional figures. Experimental evidence shows that popular supersampling patterns have discrepancies with better asymptotic behavior than random sampling, which is not inconsistent with theoretical bounds on discrepancy." ] }
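The n^O(d) exact algorithms mentioned above enumerate "critical" anchored boxes whose corners take coordinates from the point set. A naive Python sketch of this enumeration (assuming the standard critical-corner characterization with both open and closed boxes; the function name is ours):

```python
from itertools import product

def star_discrepancy(points):
    """Naive star discrepancy of points in [0,1]^d: the supremum
    over anchored boxes [0, q) is attained at corners whose
    coordinates come from the point coordinates (or 1), provided
    both open and closed boxes are checked at each corner.
    Cost is n^O(d), matching the hardness discussed above."""
    n, d = len(points), len(points[0])
    coords = [sorted({p[j] for p in points} | {1.0}) for j in range(d)]
    disc = 0.0
    for q in product(*coords):
        vol = 1.0
        for qj in q:
            vol *= qj
        closed = sum(all(p[j] <= q[j] for j in range(d)) for p in points)
        open_ = sum(all(p[j] < q[j] for j in range(d)) for p in points)
        disc = max(disc, closed / n - vol, vol - open_ / n)
    return disc

# One point at the centre of the unit square: the closed box
# [0, 1/2]^2 holds the point but has volume 1/4, giving D* = 3/4.
print(star_discrepancy([(0.5, 0.5)]))   # -> 0.75
```

The grid of candidate corners has up to (n+1)^d entries, which is exactly the exponential dependence on d that the W[1]-hardness results show cannot be avoided in general.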
1103.4503
2952457341
Discrepancy measures how uniformly distributed a point set is with respect to a given set of ranges. There are two notions of discrepancy, namely continuous discrepancy and combinatorial discrepancy. Depending on the ranges, several possible variants arise, for example star discrepancy, box discrepancy, and discrepancy of half-spaces. In this paper, we investigate the hardness of these problems with respect to the dimension d of the underlying space. All these problems are solvable in time n^O(d) , but such a time dependency quickly becomes intractable for high-dimensional data. Thus it is interesting to ask whether the dependency on d can be moderated. We answer this question negatively by proving that the canonical decision problems are W[1]-hard with respect to the dimension. This is done via a parameterized reduction from the Clique problem. As the parameter stays linear in the input parameter, the results moreover imply that these problems require n^ (d) time, unless 3-Sat can be solved in 2^o(n) time. Further, we derive that testing whether a given set is an -net with respect to half-spaces takes n^ (d) time under the same assumption. As intermediate results, we discover the W[1]-hardness of other well known problems, such as determining the largest empty star inside the unit cube. For this, we show that it is even hard to approximate within a factor of 2^n .
The problem has been studied extensively in the planar case, see for example @cite_24 and references therein. When the dimension is part of the input, the problem has only recently been shown to be @math -hard by @cite_11 and the fastest exact algorithm runs in time @math @cite_11 . Also recently, @cite_7 gave an @math -time @math -approximation algorithm for this problem. Note that, since @math for some @math , this counts as fpt time in parameters @math and @math , in contrast to our results for . The NP-hardness of the problem was shown by @cite_4 .
{ "cite_N": [ "@cite_24", "@cite_4", "@cite_7", "@cite_11" ], "mid": [ "2064761827", "2162820178", "1493107308", "1569585172" ], "abstract": [ "We provide two algorithms for solving the following problem: Given a rectangle containing n points, compute the largest-area and the largest-perimeter subrectangles with sides parallel to the given rectangle that lie within this rectangle and that do not contain any points in their interior. For finding the largest-area empty rectangle, the first algorithm takes O ( n log 3 n ) time and O ( n ) memory space and it simplifies the algorithm given by Chazelle, Drysdale and Lee which takes O ( n log 3 n ) time but O ( n log n ) storage. The second algorithm for computing the largest-area empty rectangle is more complicated but it only takes O ( n log 2 n ) time and O ( n ) memory space. The two algorithms for computing the largest-area rectangle can be modified to compute the largest-perimeter rectangle in O ( n log 2 n ) and O ( n log n ) time, respectively. Since O( n log n ) is a lower bound on time for computing the largest-perimeter empty rectangle, the second algorithm for computing such a rectangle is optimal within a multiplicative constant.", "The well-known star discrepancy is a common measure for the uniformity of point distributions. It is used, e.g., in multivariate integration, pseudo random number generation, experimental design, statistics, or computer graphics. We study here the complexity of calculating the star discrepancy of point sets in the d-dimensional unit cube and show that this is an NP-hard problem. To establish this complexity result, we first prove NP-hardness of the following related problems in computational geometry: Given n points in the d-dimensional unit cube, find a subinterval of minimum or maximum volume that contains k of the n points. Our results for the complexity of the subinterval problems settle a conjecture of E. Thiemard [E. 
Thiemard, Optimal volume subintervals with k points and star discrepancy via integer programming, Math. Meth. Oper. Res. 54 (2001) 21-45].", "We give the first efficient (1−e)-approximation algorithm for the following problem: Given an axis-parallel d-dimensional box R in ℝ d containing n points, compute a maximum-volume empty axis-parallel d-dimensional box contained in R. The minimum of this quantity over all such point sets is of the order ( ( 1 n ) ). Our algorithm finds an empty axis-aligned box whose volume is at least (1−e) of the maximum in O((8ede −2) d ⋅nlog d n) time. No previous efficient exact or approximation algorithms were known for this problem for d≥4. As the problem has been recently shown to be NP-hard in arbitrarily high dimensions (i.e., when d is part of the input), the existence of an efficient exact algorithm is unlikely.", "The maximum empty rectangle problem is as follows: Given a set of red points in ℝd and an axis-aligned hyperrectangle B, find an axis-aligned hyperrectangle R of greatest volume that is contained in B and contains no red points. In addition to this problem, we also consider three natural variants: where we find a hypercube instead of a hyperrectangle, where we try to contain as many blue points as possible instead of maximising volume, and where we do both. Combining the results of this paper with previous results, we now know that all four of these problems (a) are NP-complete if d is part of the input, (b) have polynomial-time sweep-plane solutions for any fixed d≥3, and (c) have near linear time solutions in two dimensions." ] }
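The star-discrepancy records above admit a small concrete illustration. The following is a hedged Python sketch of the naive n^O(d) grid enumeration mentioned in these abstracts, specialized to d = 2: the supremum over anchored boxes is evaluated on the candidate grid induced by the point coordinates (plus 1 in each axis), comparing box volume against both the open and the closed point count. The function name and structure are my own, not taken from the cited works.

```python
from itertools import product

def star_discrepancy_2d(points):
    """Brute-force star discrepancy of a 2-D point set in [0,1]^2.

    Enumerates the candidate grid induced by the point coordinates
    (plus 1.0 in each axis); for every candidate anchored box it
    compares the box volume with both the open and the closed point
    count, which captures the sup over open/closed anchored boxes.
    """
    n = len(points)
    xs = sorted({p[0] for p in points} | {1.0})
    ys = sorted({p[1] for p in points} | {1.0})
    best = 0.0
    for x, y in product(xs, ys):
        vol = x * y
        open_cnt = sum(1 for px, py in points if px < x and py < y)
        closed_cnt = sum(1 for px, py in points if px <= x and py <= y)
        best = max(best, vol - open_cnt / n, closed_cnt / n - vol)
    return best
```

For a single point at (0.5, 0.5) the worst box is the closed box [0, 0.5]^2, giving discrepancy 0.75; the O(n^3) cost of this 2-D sketch is exactly the kind of n^O(d) dependency whose unavoidability the record above establishes.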
1103.3745
2952340783
We propose AllDiffPrecedence, a new global constraint that combines together an AllDifferent constraint with precedence constraints that strictly order given pairs of variables. We identify a number of applications for this global constraint including instruction scheduling and symmetry breaking. We give an efficient propagation algorithm that enforces bounds consistency on this global constraint. We show how to implement this propagator using a decomposition that extends the bounds consistency enforcing decomposition proposed for the AllDifferent constraint. Finally, we prove that enforcing domain consistency on this global constraint is NP-hard in general.
Decompositions that achieve bounds consistency have been given for a number of global constraints. Relevant to this work, similar decompositions have been given for a single constraint @cite_10 , as well as for overlapping constraints @cite_20 . These decompositions have the property that enforcing bounds consistency on the decomposition achieves bounds consistency on the original global constraint.
{ "cite_N": [ "@cite_10", "@cite_20" ], "mid": [ "2949900368", "1626677565" ], "abstract": [ "We show that some common and important global constraints like ALL-DIFFERENT and GCC can be decomposed into simple arithmetic constraints on which we achieve bound or range consistency, and in some cases even greater pruning. These decompositions can be easily added to new solvers. They also provide other constraints with access to the state of the propagator by sharing of variables. Such sharing can be used to improve propagation between constraints. We report experiments with our decomposition in a pseudo-Boolean solver.", "We study propagation algorithms for the conjunction of two ALLDIFFERENT constraints. Solutions of an ALLDIFFERENT constraint can be seen as perfect matchings on the variable-value bipartite graph. Therefore, we investigate the problem of finding simultaneous bipartite matchings. We present an extension of the famous Hall theorem which characterizes when simultaneous bipartite matchings exist. Unfortunately, finding such matchings is NP-hard in general. However, we prove a surprising result that finding a simultaneous matching on a convex bipartite graph takes just polynomial time. Based on this theoretical result, we provide the first polynomial time bound consistency algorithm for the conjunction of two ALLDIFFERENT constraints. We identify a pathological problem on which this propagator is exponentially faster compared to existing propagators. Our experiments show that this new propagator can offer significant benefits over existing methods." ] }
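Since several of the records above concern bounds consistency for AllDifferent, a compact illustration may help. Below is a hedged Python sketch of naive Hall-interval pruning run to a fixpoint over interval domains. It is a monolithic propagator written for clarity, not the arithmetic decomposition of the cited work, and all names are invented.

```python
def alldiff_bounds(domains):
    """Naive fixpoint Hall-interval pruning for AllDifferent over
    integer interval domains, given as a list of mutable [lb, ub].

    If an interval [l, u] contains the domains of exactly u - l + 1
    variables, those variables must use up all its values, so the
    bounds of every other variable are pushed out of [l, u].
    Returns the pruned domains, or None if infeasible.
    """
    changed = True
    while changed:
        changed = False
        vals = sorted({b for d in domains for b in d})
        for l in vals:
            for u in vals:
                if u < l:
                    continue
                inside = [i for i, (a, b) in enumerate(domains)
                          if a >= l and b <= u]
                cap = u - l + 1
                if len(inside) > cap:
                    return None  # more variables than values: infeasible
                if len(inside) == cap:  # Hall interval: prune outsiders
                    for i, (a, b) in enumerate(domains):
                        if i in inside:
                            continue
                        if l <= a <= u:
                            domains[i][0] = u + 1
                            changed = True
                        if l <= b <= u:
                            domains[i][1] = l - 1
                            changed = True
                        if domains[i][0] > domains[i][1]:
                            return None  # domain wiped out
    return domains
```

For example, with domains [1,2], [1,2], [1,3], the interval [1,2] is a Hall interval for the first two variables, so the third is pruned to [3,3]; a third variable with domain [1,2] would instead be detected as infeasible.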
1103.3745
2952340783
We propose AllDiffPrecedence, a new global constraint that combines together an AllDifferent constraint with precedence constraints that strictly order given pairs of variables. We identify a number of applications for this global constraint including instruction scheduling and symmetry breaking. We give an efficient propagation algorithm that enforces bounds consistency on this global constraint. We show how to implement this propagator using a decomposition that extends the bounds consistency enforcing decomposition proposed for the AllDifferent constraint. Finally, we prove that enforcing domain consistency on this global constraint is NP-hard in general.
A number of global constraints have been combined, and specialized propagators have been developed for the resulting conjunctions. For example, a global lexicographical ordering constraint and a sum constraint have been combined @cite_12 . As a second example, a generic method has been proposed for propagating combinations of the global lexicographical ordering constraint with a family of global constraints including Regular and Sequence @cite_17 .
{ "cite_N": [ "@cite_12", "@cite_17" ], "mid": [ "1484088481", "1715626001" ], "abstract": [ "We introduce a new global constraint which combines together the lexicographic ordering constraint with some sum constraints. Lexicographic ordering constraints are frequently used to break symmetry, whilst sum constraints occur in many problems involving capacity or partitioning. Our results show that this global constraint is useful when there is a very large space to explore, such as when the problem is unsatisfiable, or when the search strategy is poor or conflicts with the symmetry breaking constraints. By studying in detail when combining lexicographical ordering with other constraints is useful, we propose a new heuristic for deciding when to combine constraints together.", "We propose a new family of constraints which combine together lexicographical ordering constraints for symmetry breaking with other common global constraints. We give a general purpose propagator for this family of constraints, and show how to improve its complexity by exploiting properties of the included global constraints." ] }
1103.4402
1979312416
In this paper we show that on bounded degree graphs and general trees, the cover time of the simple random walk is asymptotically equal to the product of the number of edges and the square of the expected supremum of the Gaussian free field on the graph, assuming that the maximal hitting time is significantly smaller than the cover time. Previously, this was only proved for regular trees and the 2D lattice. Furthermore, for general trees, we derive exponential concentration for the cover time, which implies that the standard deviation of the cover time is bounded by the geometric mean of the cover time and the maximal hitting time.
Prior to our work, the only non-trivial examples for which it has been verified are regular trees and the 2D torus. For regular trees, the asymptotics of cover times was shown in @cite_27 , while the supremum of the Gaussian free field was folklore; a precise estimate up to an additive constant can be deduced by adapting Bramson's methods on the maximal displacement of branching Brownian motion @cite_31 . Indeed, an analogue of Bramson's result for a wide range of branching random walks was proved by Addario-Berry and Reed @cite_25 . For the 2D lattice, the asymptotics of the supremum of the Gaussian free field was determined in @cite_14 , and the asymptotics of cover times was established in @cite_16 . We emphasize that in both cases, deriving the asymptotics of cover times was quite delicate, even though the supremum of the GFF had already been established.
{ "cite_N": [ "@cite_14", "@cite_27", "@cite_31", "@cite_16", "@cite_25" ], "mid": [ "1572521168", "1971599265", "2098130249", "2120076230", "2127129887" ], "abstract": [ "We consider the lattice version of the free field in two dimensions (also called harmonic crystal). The main aim of the paper is to discuss quantitatively the entropic repulsion of the random surface in the presence of a hard wall. The basic ingredient of the proof is the analysis of the maximum of the field which requires a multiscale analysis reducing the problem essentially to a problem on a field with a tree structure. 2000 MSC: 60K35, 60G15, 82B41", "Abstract For simple random walk on a finite tree, the cover time is the time taken to visit every vertex. For the balanced b-ary tree of height m, the cover time is shown to be asymptotic to 2m^2 b^{m+1} (log b)/(b − 1) as m → ∞ . On the uniform random labeled tree on n vertices, we give a convincing heuristic argument that the mean time to cover and return to the root is asymptotic to 6(2π)^{1/2} n^{3/2} , and prove a weak O(n^{3/2}) upper bound. The argument rests upon a recursive formula for cover time of trees generated by a simple branching process.", "It is shown that the position of any fixed percentile of the maximal displacement of standard branching Brownian motion in one dimension is 2^{1/2} t − 3·2^{−3/2} log t + O(1) at time t, the second-order term having been previously unknown. This determines (to within O(1)) the position of the travelling wave of the semilinear heat equation, u_t = (1/2) u_{xx} + f(u), in the classic paper by Kolmogorov-Petrovsky-Piscounov, “Etude de l'equation de la diffusion avec croissance de la quantite de la matiere et son application a un probleme biologique”, 1937.", "Let T(x;ε) denote the first hitting time of the disc of radius ε centered at x for Brownian motion on the two-dimensional torus T^2 . We prove that sup_{x∈T^2} T(x;ε)/|log ε|^2 → 2/π as ε → 0. The same applies to Brownian motion on any smooth, compact, connected, two-dimensional Riemannian manifold with unit area and no boundary. As a consequence, we prove a conjecture, due to Aldous (1989), that the number of steps it takes a simple random walk to cover all points of the lattice torus Z_n^2 is asymptotic to 4n^2 (log n)^2/π . Determining these asymptotics is an essential step toward analyzing the fractal structure of the set of uncovered sites before coverage is complete; so far, this structure was only studied non-rigorously in the physics literature. We also establish a conjecture, due to Kesten and Révész, that describes the asymptotics for the number of steps needed by simple random walk in Z^2 to cover the disc of radius n.", "Given a branching random walk, let @math be the minimum position of any member of the @math th generation. We calculate @math to within O(1) and prove exponential tail bounds for @math , under quite general conditions on the branching random walk. In particular, together with work by Bramson [Z. Wahrsch. Verw. Gebiete 45 (1978) 89―108], our results fully characterize the possible behavior of @math when the branching random walk has bounded branching and step size." ] }
1103.4402
1979312416
In this paper we show that on bounded degree graphs and general trees, the cover time of the simple random walk is asymptotically equal to the product of the number of edges and the square of the expected supremum of the Gaussian free field on the graph, assuming that the maximal hitting time is significantly smaller than the cover time. Previously, this was only proved for regular trees and the 2D lattice. Furthermore, for general trees, we derive exponential concentration for the cover time, which implies that the standard deviation of the cover time is bounded by the geometric mean of the cover time and the maximal hitting time.
There are additional high precision estimates for cover times and Gaussian free fields on trees and 2D lattice: For regular binary trees @math of height @math , Bramson and Zeitouni @cite_13 proved that @math is tight after proper centering. For general trees, Feige and Zeitouni @cite_32 studied the computational perspective and designed a deterministic polynomial-time algorithm to approximate the cover time up to a factor of @math for any fixed @math . For the 2D lattice, in a recent breakthrough paper of Bramson and Zeitouni @cite_3 , it was shown that the supremum of the Gaussian free field is tight after proper centering, together with an estimate on its expectation up to an additive constant. It improved upon the tightness result along a subsequence by Bolthausen, Deuschel and Zeitouni @cite_1 , and a super-concentration result due to Chatterjee @cite_39 .
{ "cite_N": [ "@cite_1", "@cite_32", "@cite_3", "@cite_39", "@cite_13" ], "mid": [ "2076567039", "1600996865", "2964051416", "2132262489", "2159477701" ], "abstract": [ "We consider the maximum of the discrete two dimensional Gaussian free field in a box, and prove the existence of a (dense) deterministic subsequence along which the maximum, centered at its mean, is tight. The method of proof relies on an argument developed by Dekking and Host for branching random walks with bounded increments and on comparison results specific to Gaussian fields.", "We present a deterministic algorithm that given a tree T with n vertices, a starting vertex v and a slackness parameter epsilon > 0, estimates within an additive error of epsilon the cover and return time, namely, the expected time it takes a simple random walk that starts at v to visit all vertices of T and return to v. The running time of our algorithm is polynomial in n/epsilon, and hence remains polynomial in n also for epsilon = 1/n^{O(1)}. We also show how the algorithm can be extended to estimate the expected cover (without return) time on trees.", "We consider the maximum of the discrete two-dimensional Gaussian free field (GFF) in a box and prove that its maximum, centered at its mean, is tight, settling a longstanding conjecture. The proof combines a recent observation by Bolthausen, Deuschel, and Zeitouni with elements from Bramson's results on branching Brownian motion and comparison theorems for Gaussian fields. An essential part of the argument is the precise evaluation, up to an error of order 1, of the expected value of the maximum of the GFF in a box. Related Gaussian fields, such as the GFF on a two-dimensional torus, are also discussed. © 2011 Wiley Periodicals, Inc.", "Disordered systems are an important class of models in statistical mechanics, having the defining characteristic that the energy landscape is a fixed realization of a random field. Examples include various models of glasses and polymers. They also arise in other areas, like fitness models in evolutionary biology. The ground state of a disordered system is the state with minimum energy. The system is said to be chaotic if a small perturbation of the energy landscape causes a drastic shift of the ground state. We present a rigorous theory of chaos in disordered systems that confirms long-standing physics intuition about connections between chaos, anomalous fluctuations of the ground state energy, and the existence of multiple valleys in the energy landscape. Combining these results with mathematical tools like hypercontractivity, we establish the existence of the above phenomena in eigenvectors of GUE matrices, the Kauffman-Levin model of evolutionary biology, directed polymers in random environment, a subclass of the generalized Sherrington-Kirkpatrick model of spin glasses, the discrete Gaussian free field, and continuous Gaussian fields on Euclidean spaces. We also list several open questions.", "In this paper, we study the tightness of solutions for a family of recursive equations. These equations arise naturally in the study of random walks on tree-like structures. Examples include the maximal displacement of branching random walk in one dimension, and the cover time of symmetric simple random walk on regular binary trees. Recursion equations associated with the distribution functions of these quantities have been used to establish weak laws of large numbers. Here, we use these recursion equations to establish the tightness of the corresponding sequences of distribution functions after appropriate centering. We phrase our results in a fairly general context, which we hope will facilitate their application in other settings." ] }
1103.4402
1979312416
In this paper we show that on bounded degree graphs and general trees, the cover time of the simple random walk is asymptotically equal to the product of the number of edges and the square of the expected supremum of the Gaussian free field on the graph, assuming that the maximal hitting time is significantly smaller than the cover time. Previously, this was only proved for regular trees and the 2D lattice. Furthermore, for general trees, we derive exponential concentration for the cover time, which implies that the standard deviation of the cover time is bounded by the geometric mean of the cover time and the maximal hitting time.
Miller and Peres @cite_35 studied the connection between cover times and the mixing times of random walks on the corresponding lamplighter graphs. In particular, they designed a procedure which allows one to compute the cover time up to @math for a family of graphs that satisfy a certain "transient" condition. Miller pointed out that this procedure should also allow one to compute the supremum of the Gaussian free field up to @math . However, it seems that their method could not be extended to the case of general trees, at least not without a further substantial ingredient.
{ "cite_N": [ "@cite_35" ], "mid": [ "2033953301" ], "abstract": [ "We show that the measure on markings of Z_n^d, d ≥ 3, with elements of {0, 1} given by i.i.d. fair coin flips on the range @math of a random walk X run until time T and 0 otherwise becomes indistinguishable from the uniform measure on such markings at the threshold T = ½T_cov(Z_n^d). As a consequence of our methods, we show that the total variation mixing time of the random walk on the lamplighter graph Z_2 ≀ Z_n^d, d ≥ 3, has a cutoff with threshold ½T_cov(Z_n^d). We give a general criterion under which both of these results hold; other examples for which this applies include bounded degree expander families, the intersection of an infinite supercritical percolation cluster with an increasing family of balls, the hypercube and the Cayley graph of the symmetric group generated by transpositions. The proof also yields precise asymptotics for the decay of correlation in the uncovered set." ] }
1103.4402
1979312416
In this paper we show that on bounded degree graphs and general trees, the cover time of the simple random walk is asymptotically equal to the product of the number of edges and the square of the expected supremum of the Gaussian free field on the graph, assuming that the maximal hitting time is significantly smaller than the cover time. Previously, this was only proved for regular trees and the 2D lattice. Furthermore, for general trees, we derive exponential concentration for the cover time, which implies that the standard deviation of the cover time is bounded by the geometric mean of the cover time and the maximal hitting time.
Benjamini, Gurel-Gurevich and Morris showed that for bounded degree graphs it is exponentially unlikely to cover the graph in linear time @cite_19 . This is a different type of large deviation result on the cover time from the one we prove.
{ "cite_N": [ "@cite_19" ], "mid": [ "2007625773" ], "abstract": [ "We show that the probability that a simple random walk covers a finite, bounded degree graph in linear time is exponentially small. We conjecture that the same holds for any simple graph." ] }
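The cover-time records above are easy to probe empirically. The following is a hedged Python sketch (my own illustration, not taken from any of the cited papers) that simulates the cover time of a simple random walk on a graph given as an adjacency list, and runs it on a cycle, for which the expected cover time is known to be n(n−1)/2.

```python
import random

def cover_time(adj, start=0, rng=None):
    """Simulate one run of a simple random walk on a graph given by an
    adjacency list {vertex: [neighbors]}, returning the number of steps
    until every vertex has been visited (the cover time of that run)."""
    rng = rng or random.Random()
    v = start
    seen = {v}
    steps = 0
    while len(seen) < len(adj):
        v = rng.choice(adj[v])  # move to a uniformly random neighbor
        seen.add(v)
        steps += 1
    return steps

# A cycle on n vertices; its expected cover time is n(n-1)/2 = 45 here,
# so the empirical mean over many seeded runs should land near 45.
n = 10
cycle = {i: [(i - 1) % n, (i + 1) % n] for i in range(n)}
runs = [cover_time(cycle, rng=random.Random(s)) for s in range(500)]
```

Every run needs at least n−1 steps (each step visits at most one new vertex), which gives a quick sanity check on the simulation; the deviation of individual runs around the mean is the kind of fluctuation the concentration results above quantify.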
1103.4402
1979312416
In this paper we show that on bounded degree graphs and general trees, the cover time of the simple random walk is asymptotically equal to the product of the number of edges and the square of the expected supremum of the Gaussian free field on the graph, assuming that the maximal hitting time is significantly smaller than the cover time. Previously, this was only proved for regular trees and the 2D lattice. Furthermore, for general trees, we derive exponential concentration for the cover time, which implies that the standard deviation of the cover time is bounded by the geometric mean of the cover time and the maximal hitting time.
In a work of Ding and Zeitouni @cite_38 , the second-order term for the cover time on a binary tree was pinned down, and a discrepancy from the supremum of the GFF was demonstrated at this scale.
{ "cite_N": [ "@cite_38" ], "mid": [ "2001123643" ], "abstract": [ "We compute the second order correction for the cover time of the binary tree of depth n by (continuous-time) random walk, and show that with probability approaching 1 as n increases, τ_cov = |E| [√(2 log 2)·n − (log n)/√(2 log 2) + O((log log n)^8)]^2, thus showing that the second order correction differs from the corresponding one for the maximum of the Gaussian free field on the tree." ] }
1103.4133
2103964590
We describe a framework to support the implementation of web-based systems intended to manipulate data stored in relational databases. Since the conceptual model of a relational database is often specified as an entity-relationship (ER) model, we propose to use the ER model to generate a complete implementation in the declarative programming language Curry. This implementation contains operations to create and manipulate entities of the data model, supports authentication, authorization, session handling, and the composition of individual operations to user processes. Furthermore, the implementation ensures the consistency of the database w.r.t. the data dependencies specified in the ER model, i.e., updates initiated by the user cannot lead to an inconsistent state of the database. In order to generate a high-level declarative implementation that can be easily adapted to individual customer requirements, the framework exploits previous works on declarative database programming and web user interface construction in Curry.
The iData toolkit @cite_4 is a framework, implemented with generic programming techniques in the functional language Clean, to construct type-safe web interfaces to data that can be persistently stored. In contrast to our framework, the construction of an application is done by the programmer, who defines the various iData elements, whereas we generate the necessary code from an ER description. Hence, integrity constraints expressed in the ER description are checked automatically, in contrast to the iData toolkit.
{ "cite_N": [ "@cite_4" ], "mid": [ "2096546704" ], "abstract": [ "In this paper we present the iData Toolkit. It allows programmers to create interactive, dynamic web applications with state on a high level of abstraction. The key element of this toolkit is the iData element. An iData element can be regarded as a self-contained object that stores values of a specified type. Generic programming techniques enable the automatic generation of HTML-forms from these types. These forms can be plugged into the web application. The iData elements can be interconnected. Complicated form dependencies can be defined in a pure functional, type safe, declarative programming style. This liberates the programmer from lots of low-level HTML programming and form handling. We illustrate the descriptive power of the toolkit by means of a small, yet complicated example: a project administration. The iData Toolkit is an excellent demonstration of the expressive power of modern generic (poly-typical) programming techniques." ] }
1103.3911
2949763420
A fundamental problem in computational geometry is to compute an obstacle-avoiding Euclidean shortest path between two points in the plane. The case of this problem on polygonal obstacles is well studied. In this paper, we consider the problem version on curved obstacles, commonly modeled as splinegons. A splinegon can be viewed as replacing each edge of a polygon by a convex curved edge (polygons are special splinegons). Each curved edge is assumed to be of O(1) complexity. Given in the plane two points s and t and a set of @math pairwise disjoint splinegons with a total of @math vertices, we compute a shortest s-to-t path avoiding the splinegons, in @math time, where k is a parameter sensitive to the structures of the input splinegons and is upper-bounded by @math . In particular, when all splinegons are convex, @math is proportional to the number of common tangents in the free space (called "free common tangents") among the splinegons. We develop techniques for solving the problem on the general (non-convex) splinegon domain, which also improve several previous results. In particular, our techniques produce an optimal output-sensitive algorithm for a basic visibility problem of computing all free common tangents among @math pairwise disjoint convex splinegons with a total of @math vertices. Our algorithm runs in @math time and @math space, where @math is the number of all free common tangents. Even for the special case where all splinegons are convex polygons, the previously best algorithm for this visibility problem takes @math time.
The polygon case of (i.e., @math contains polygons only) is well studied. By constructing the visibility graph @cite_22 , a shortest @math - @math path can be found in @math time, where @math is the size of the visibility graph. By building a shortest path map, Storer and Reif solved this case in @math time @cite_15 . Mitchell @cite_9 gave the first subquadratic ( @math time) algorithm for it, based on the continuous Dijkstra approach. Also using the continuous Dijkstra approach and a conforming planar subdivision, Hershberger and Suri @cite_26 presented an @math time solution. An @math time algorithm was given in @cite_4 (a preliminary version is in @cite_29 and full details are in @cite_12 ). Thus, our algorithm improves the results in @cite_4 @cite_15 and is faster than the @math time solution @cite_26 for small values of @math , say @math . Very recently, an unrefereed report @cite_32 announced an algorithm for the polygon case based on the continuous Dijkstra approach with @math running time. Our algorithm is superior to it when @math .
{ "cite_N": [ "@cite_26", "@cite_4", "@cite_22", "@cite_9", "@cite_29", "@cite_32", "@cite_15", "@cite_12" ], "mid": [ "2058510050", "2029668981", "1482503584", "2167427782", "2003711654", "1583742799", "1985010089", "" ], "abstract": [ "We propose an optimal-time algorithm for a classical problem in plane computational geometry: computing a shortest path between two points in the presence of polygonal obstacles. Our algorithm runs in worst-case time O(n log n) and requires O(n log n) space, where n is the total number of vertices in the obstacle polygons. The algorithm is based on an efficient implementation of wavefront propagation among polygonal obstacles, and it actually computes a planar map encoding shortest paths from a fixed source point to all other points of the plane; the map can be used to answer single-source shortest path queries in O(log n) time. The time complexity of our algorithm is a significant improvement over all previously published results on the shortest path problem. Finally, we also discuss extensions to more general shortest path problems, involving nonpoint and multiple sources.", "We give an algorithm to compute a (Euclidean) shortest path in a polygon with h holes and a total of n vertices. The algorithm uses O(n) space and requires (O(n+h^2 n) ) time.", "The visibility graph of a set of nonintersecting polygonal obstacles in the plane is an undirected graph whose vertex set consists of the vertices of the obstacles and whose edges are pairs of vertices @math such that the open line segment between u and v does not intersect any of the obstacles. The visibility graph is an important combinatorial structure in computational geometry and is used in applications such as solving visibility problems and computing shortest paths. 
This paper presents an algorithm that computes the visibility graph of a set of obstacles in time @math , where E is the number of edges in the visibility graph and n is the total number of vertices in all the obstacles.", "We give a subquadratic (O(n3 2+∊) time and O(n) space) algorithm for computing Euclidean shortest paths in the plane in the presence of polygonal obstacles; previous time bounds were at least quadratic in n, in the worst case. The method avoids use of visibility graphs, relying instead on the continuous Dijkstra paradigm. The output is a shortest path map (of size O(n)) with respect to a given source point, which allows shortest path length queries to be answered in time O(log n). The algorithm extends to the case of multiple source points, yielding a method to compute a Voronoi diagram with respect to the shortest path metric.", "The problem of determining the Euclidean shortest path between two points in the presence of m simple polygonal obstacles is studied. An O( m 2 logn + nlogn ) algorithm is developed, where n is the total number of points in the obstacles. A simple O(E+T) algorithm for determining the visibility graph is also shown, where E is the number of visibility edges and T is the time for triangulating the point set. This is extended to a O(E s + nlogn) algorithm for the shortest path problem where E s is bounded by m 2 .", "We present an algorithm to find an Euclidean Shortest Path from a source vertex @math to a sink vertex @math in the presence of obstacles in @math . Our algorithm takes @math time and @math space. Here, @math is the time to triangulate the polygonal region, @math is the number of obstacles, and @math is the number of vertices. This bound is close to the known lower bound of @math time and @math space. 
Our approach involves progressing a shortest path wavefront as in the continuous Dijkstra-type method, confining its expansion to regions of interest.", "We present a practical algorithm for finding minimum-length paths between points in the Euclidean plane with (not necessarily convex) polygonal obstacles. Prior to this work, the best known algorithm for finding the shortest path between two points in the plane required O(n^2 log n) time and O(n^2) space, where n denotes the number of obstacle edges. Assuming that a triangulation or a Voronoi diagram for the obstacle space is provided with the input (if it is not, either one can be precomputed in O(n log n) time), we present an O(kn) time algorithm, where k denotes the number of “islands” (connected components) in the obstacle space. The algorithm uses only O(n) space and, given a source point s , produces an O(n) size data structure such that the distance between s and any other point x in the plane (x is not necessarily an obstacle vertex or a point on an obstacle edge) can be computed in O(1) time. The algorithm can also be used to compute shortest paths for the movement of a disk (so that optimal movement for arbitrary objects can be computed to the accuracy of enclosing them with the smallest possible disk).", "" ] }
1103.3911
2949763420
A fundamental problem in computational geometry is to compute an obstacle-avoiding Euclidean shortest path between two points in the plane. The case of this problem on polygonal obstacles is well studied. In this paper, we consider the problem version on curved obstacles, commonly modeled as splinegons. A splinegon can be viewed as replacing each edge of a polygon by a convex curved edge (polygons are special splinegons). Each curved edge is assumed to be of O(1) complexity. Given in the plane two points s and t and a set of @math pairwise disjoint splinegons with a total of @math vertices, we compute a shortest s-to-t path avoiding the splinegons, in @math time, where k is a parameter sensitive to the structures of the input splinegons and is upper-bounded by @math . In particular, when all splinegons are convex, @math is proportional to the number of common tangents in the free space (called "free common tangents") among the splinegons. We develop techniques for solving the problem on the general (non-convex) splinegon domain, which also improve several previous results. In particular, our techniques produce an optimal output-sensitive algorithm for a basic visibility problem of computing all free common tangents among @math pairwise disjoint convex splinegons with a total of @math vertices. Our algorithm runs in @math time and @math space, where @math is the number of all free common tangents. Even for the special case where all splinegons are convex polygons, the previously best algorithm for this visibility problem takes @math time.
For a single splinegon @math , a shortest @math - @math path in @math can be found in @math time, and further, shortest paths from @math to all vertices of @math can be found in @math time @cite_5 .
{ "cite_N": [ "@cite_5" ], "mid": [ "2063244634" ], "abstract": [ "The goal of this paper is to show that the concept of the shortest path inside a polygonal region contributes to the design of efficient algorithms for certain geometric optimization problems involving simple polygons: computing optimum separators, maximum area or perimeter-inscribed triangles, a minimum area circumscribed concave quadrilateral, or a maximum area contained triangle. The structure for the algorithms presented is as follows: (a) decompose the initial problem into a low-degree polynomial number of optimization problems; (b) solve each individual subproblem in constant time using standard methods of calculus, basic methods of numerical analysis, or linear programming. These same optimization techniques can be applied to splinegons (curved polygons). First a decomposition technique for curved polygons is developed; this technique is substituted for triangulation in creating equally efficient curved versions of the algorithms for the shortest-path tree, ray-shooting, and two-point shortest path..." ] }
1103.3911
2949763420
A fundamental problem in computational geometry is to compute an obstacle-avoiding Euclidean shortest path between two points in the plane. The case of this problem on polygonal obstacles is well studied. In this paper, we consider the problem version on curved obstacles, commonly modeled as splinegons. A splinegon can be viewed as replacing each edge of a polygon by a convex curved edge (polygons are special splinegons). Each curved edge is assumed to be of O(1) complexity. Given in the plane two points s and t and a set of @math pairwise disjoint splinegons with a total of @math vertices, we compute a shortest s-to-t path avoiding the splinegons, in @math time, where k is a parameter sensitive to the structures of the input splinegons and is upper-bounded by @math . In particular, when all splinegons are convex, @math is proportional to the number of common tangents in the free space (called "free common tangents") among the splinegons. We develop techniques for solving the problem on the general (non-convex) splinegon domain, which also improve several previous results. In particular, our techniques produce an optimal output-sensitive algorithm for a basic visibility problem of computing all free common tangents among @math pairwise disjoint convex splinegons with a total of @math vertices. Our algorithm runs in @math time and @math space, where @math is the number of all free common tangents. Even for the special case where all splinegons are convex polygons, the previously best algorithm for this visibility problem takes @math time.
It is not clear how to apply the continuous Dijkstra approach @cite_26 @cite_9 to our problem (even when all splinegons are discs) due to the curved obstacle boundaries. For example, Mitchell's approach @cite_9 uses a data structure for processing wavelet dragging queries by modeling them as high-dimensional radical-free semialgebraic range queries. In our problem, however, such queries would involve not only radical numbers but also inverse trigonometric operations (e.g., arcsine), and hence similar techniques do not seem to apply. Hershberger and Suri's approach @cite_26 relies heavily on a conforming subdivision defined by the vertices of the polygonal obstacles. In our problem, however, it seems highly elusive to determine a set of @math vertices or points that can help build such a subdivision. One might attempt to use the splinegon vertices to build such a subdivision. But an important property used by the subdivision @cite_26 is that the generator of every wavelet must be one of the obstacle vertices. Yet in our problem, a generator need not be a splinegon vertex.
{ "cite_N": [ "@cite_9", "@cite_26" ], "mid": [ "2167427782", "2058510050" ], "abstract": [ "We give a subquadratic (O(n3 2+∊) time and O(n) space) algorithm for computing Euclidean shortest paths in the plane in the presence of polygonal obstacles; previous time bounds were at least quadratic in n, in the worst case. The method avoids use of visibility graphs, relying instead on the continuous Dijkstra paradigm. The output is a shortest path map (of size O(n)) with respect to a given source point, which allows shortest path length queries to be answered in time O(log n). The algorithm extends to the case of multiple source points, yielding a method to compute a Voronoi diagram with respect to the shortest path metric.", "We propose an optimal-time algorithm for a classical problem in plane computational geometry: computing a shortest path between two points in the presence of polygonal obstacles. Our algorithm runs in worst-case time O(n log n) and requires O(n log n) space, where n is the total number of vertices in the obstacle polygons. The algorithm is based on an efficient implementation of wavefront propagation among polygonal obstacles, and it actually computes a planar map encoding shortest paths from a fixed source point to all other points of the plane; the map can be used to answer single-source shortest path queries in O(log n) time. The time complexity of our algorithm is a significant improvement over all previously published results on the shortest path problem. Finally, we also discuss extensions to more general shortest path problems, involving nonpoint and multiple sources." ] }
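The paragraph above argues that curved (e.g., circular-arc) obstacle boundaries force inverse trigonometric operations into wavefront computations, unlike the polygonal case where tangents pass through obstacle vertices. A minimal illustration of this point, not taken from any of the cited papers (the function name and setup are ours): computing the tangent points from an external point to a disc already requires an arccosine.

```python
import math

def tangent_points(px, py, cx, cy, r):
    """Tangent points from an external point P = (px, py) to the circle with
    center C = (cx, cy) and radius r.  The half-angle at C needs an inverse
    trigonometric function (acos), whereas a tangent to a polygonal obstacle
    simply passes through one of its vertices -- no inverse trig required."""
    dx, dy = px - cx, py - cy
    d = math.hypot(dx, dy)
    if d <= r:
        raise ValueError("P must lie strictly outside the circle")
    theta = math.atan2(dy, dx)   # direction from the center C toward P
    alpha = math.acos(r / d)     # angle at C between C->P and C->tangent point
    return [(cx + r * math.cos(theta + s * alpha),
             cy + r * math.sin(theta + s * alpha)) for s in (1.0, -1.0)]
```

For P = (2, 0) and the unit circle centered at the origin, this yields the tangent points (1/2, ±√3/2); the appearance of acos here is exactly the kind of operation the range-query machinery of @cite_9 is not designed for.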
1103.3911
2949763420
A fundamental problem in computational geometry is to compute an obstacle-avoiding Euclidean shortest path between two points in the plane. The case of this problem on polygonal obstacles is well studied. In this paper, we consider the problem version on curved obstacles, commonly modeled as splinegons. A splinegon can be viewed as replacing each edge of a polygon by a convex curved edge (polygons are special splinegons). Each curved edge is assumed to be of O(1) complexity. Given in the plane two points s and t and a set of @math pairwise disjoint splinegons with a total of @math vertices, we compute a shortest s-to-t path avoiding the splinegons, in @math time, where k is a parameter sensitive to the structures of the input splinegons and is upper-bounded by @math . In particular, when all splinegons are convex, @math is proportional to the number of common tangents in the free space (called "free common tangents") among the splinegons. We develop techniques for solving the problem on the general (non-convex) splinegon domain, which also improve several previous results. In particular, our techniques produce an optimal output-sensitive algorithm for a basic visibility problem of computing all free common tangents among @math pairwise disjoint convex splinegons with a total of @math vertices. Our algorithm runs in @math time and @math space, where @math is the number of all free common tangents. Even for the special case where all splinegons are convex polygons, the previously best algorithm for this visibility problem takes @math time.
Constructing the visibility graph for polygonal objects is well studied @cite_16 @cite_22 @cite_28 @cite_29 @cite_25 @cite_0 @cite_10 . The fastest algorithm for it takes @math time @cite_22 , where @math is the size of the visibility graph. For the relevant visibility graph problem @cite_4 @cite_31 @cite_0 (or building the relevant visibility graph ) on splinegons, two special cases have been studied. When @math contains @math convex objects of @math complexity each, the problem is solvable in @math time @cite_31 , where @math is the number of free common tangents. If @math contains @math convex polygons , as in @cite_4 @cite_0 , then the problem is solvable in @math time; an open question was posed in @cite_4 to solve this case in @math time, where @math is the number of free common tangents. Note that our optimal @math time result is better than the solution desired by this open question.
{ "cite_N": [ "@cite_4", "@cite_22", "@cite_28", "@cite_29", "@cite_0", "@cite_31", "@cite_16", "@cite_10", "@cite_25" ], "mid": [ "2029668981", "1482503584", "2132339863", "2003711654", "2016584965", "2147097381", "1986014106", "2014320134", "2039204626" ], "abstract": [ "We give an algorithm to compute a (Euclidean) shortest path in a polygon with h holes and a total of n vertices. The algorithm uses O(n) space and requires O(n + h^2 log n) time.", "The visibility graph of a set of nonintersecting polygonal obstacles in the plane is an undirected graph whose vertex set consists of the vertices of the obstacles and whose edges are pairs of vertices @math such that the open line segment between u and v does not intersect any of the obstacles. The visibility graph is an important combinatorial structure in computational geometry and is used in applications such as solving visibility problems and computing shortest paths. This paper presents an algorithm that computes the visibility graph of a set of obstacles in time @math , where E is the number of edges in the visibility graph and n is the total number of vertices in all the obstacles.", "Given a triangulation of a simple polygon P, we present linear-time algorithms for solving a collection of problems concerning shortest paths and visibility within P. These problems include calculation of the collection of all shortest paths inside P from a given source vertex S to all the other vertices of P, calculation of the subpolygon of P consisting of points that are visible from a given segment within P, preprocessing P for fast \"ray shooting\" queries, and several related problems.", "The problem of determining the Euclidean shortest path between two points in the presence of m simple polygonal obstacles is studied. An O(m^2 log n + n log n) algorithm is developed, where n is the total number of points in the obstacles.
A simple O(E+T) algorithm for determining the visibility graph is also shown, where E is the number of visibility edges and T is the time for triangulating the point set. This is extended to an O(E_s + n log n) algorithm for the shortest path problem where E_s is bounded by m^2.", "Abstract An algorithm is presented which computes shortest paths in the Euclidean plane that do not cross given obstacles. The set of obstacles is assumed to consist of f disjoint convex polygons with n vertices in total. After preprocessing time O(n + f^2 log n), the shortest path between two arbitrary query points can be found in O(f^2 + n log n) time. The space complexity is O(n + f^2).", "This paper describes a new algorithm for constructing the set of free bitangents of a collection of n disjoint convex obstacles of constant complexity. The algorithm runs in time O(n log n + k), where k is the output size, and uses O(n) space. While earlier algorithms achieve the same optimal running time, this is the first optimal algorithm that uses only linear space. The visibility graph or the visibility complex can be computed in the same time and space. The only complicated data structure used by the algorithm is a splittable queue, which can be implemented easily using red-black trees. The algorithm is conceptually very simple, and should therefore be easy to implement and quite fast in practice. The algorithm relies on greedy pseudotriangulations, which are subgraphs of the visibility graph with many nice combinatorial properties. These properties, and thus the correctness of the algorithm, are partially derived from properties of a certain partial order on the faces of the visibility complex.", "Consider a collection of disjoint polygons in the plane containing a total of n edges. We show how to build, in O(n^2) time and space, a data structure from which in O(n) time we can compute the visibility polygon of a given point with respect to the polygon collection.
As an application of this structure, the visibility graph of the given polygons can be constructed in O(n^2) time and space. This implies that the shortest path that connects two points in the plane and avoids the polygons in our collection can be computed in O(n^2) time, improving earlier O(n^2 log n) results.", "Abstract Given a set S of line segments in the plane, its visibility graph G_S is the undirected graph which has the endpoints of the line segments in S as nodes and in which two nodes (points) are adjacent whenever they ‘see’ each other (the line segments in S are regarded as nontransparent obstacles). It is shown that G_S can be constructed in O(n^2) time and space for a set S of n nonintersecting line segments. As an immediate implication, the shortest path between two points in the plane avoiding a set of n nonintersecting line segments can be computed in O(n^2) time and space", "Let S be a set of n non-intersecting line segments in the plane. The visibility graph G_S of S is the graph that has the endpoints of the segments in S as nodes and in which two nodes are adjacent whenever they can “see” each other (i.e., the open line segment joining them is disjoint from all segments or is contained in a segment). Two new methods are presented to construct G_S . Both methods are very simple to implement. The first method is based on a new solution to the following problem: given a set of points, for each point sort the other points around it by angle. It runs in time O(n^2). The second method uses the fact that visibility graphs often are sparse and runs in time O(m log n) where m is the number of edges in G_S . Both methods use only O(n) storage." ] }
1103.4340
2950964453
A Multi-hop Control Network (MCN) consists of a plant where the communication between sensor, actuator and computational unit is supported by a wireless multi-hop communication network, and data flow is performed using scheduling and routing of sensing and actuation data. We address the problem of characterizing controllability and observability of a MCN, by means of necessary and sufficient conditions on the plant dynamics and on the communication scheduling and routing. We provide a methodology to design scheduling and routing, in order to satisfy controllability and observability of a MCN for any fault occurrence in a given set of configurations of failures.
There is a wide literature on Networked Control Systems, see for example @cite_3 , @cite_17 , @cite_5 , @cite_0 and references therein. The literature on robust stability of networked control systems (see e.g. @cite_12 , @cite_11 , @cite_15 ) generally addresses stability analysis in the presence of packet loss and variable delays, but does not take into account the non-idealities introduced by the scheduling and routing communication protocols of multi-hop control networks. When relating our paper to current research on the interaction of control networks and communication protocols, most efforts in the literature focus on message scheduling and sampling-time assignment for sensors, actuators and controllers interconnected by wired common-bus networks, e.g. @cite_13 , @cite_20 , @cite_1 , @cite_16 , @cite_4 . The authors in @cite_21 use model predictive control to stabilize a plant over a multi-hop control network, considering only the delay introduced by the routing policy.
{ "cite_N": [ "@cite_11", "@cite_4", "@cite_21", "@cite_1", "@cite_3", "@cite_16", "@cite_0", "@cite_5", "@cite_15", "@cite_20", "@cite_13", "@cite_12", "@cite_17" ], "mid": [ "2119275794", "2033017950", "2120997348", "2103013786", "2152520525", "", "", "2031475099", "2256710691", "2128969277", "", "1544149452", "2054161434" ], "abstract": [ "In this paper, the stability of a Networked Control System (NCS) with time-varying delays is analyzed. A discrete-time state-space model is used to analyze the dynamics of the NCS. The delay is introduced by the network itself and is assumed to be upperbounded by a fraction of the sample-time. A typical motion control example is presented in which the time-variation of the delay results in an unstable system, although for each fixed delay the system is stable. Conditions in terms of LMIs are presented guaranteeing the robust asymptotic stability of the discrete-time system, given bounds on the uncertain time-varying delay. Moreover, it is shown that the robust stability conditions also guarantee asymptotic stability of the intersample behavior. Additionally, LMIs are presented to synthesize a feedback controller that stabilizes the system for the uncertain time-varying delay. The results are illustrated on an example concerning a mechanical model of a motor driving a roller in a printer.", "This paper provides a general framework for analyzing the stability of general nonlinear networked control systems (NCS) with disturbances in the setting of stability. Our presentation provides sharper results for both gain and maximum allowable transfer interval (MATI) than previously obtainable and details the property of uniformly persistently exciting scheduling protocols. This class of protocols was shown to lead to stability for high enough transmission rates and were a natural property to demand, especially in the design of wireless scheduling protocols. 
The property is used directly in a novel proof technique based on the notions of vector comparison and (quasi)-monotone systems. We explore these results through analytical comparisons to those in the literature, as well as through simulations and numerical comparisons that verify that the uniform persistence of excitation property of protocols is, in some sense, the “finest” property that can be extracted from wireless scheduling protocols.", "Remote control over wireless multi-hop networks is considered. Time-varying delays for the transmission of sensor and control data over the wireless network are caused by a randomized multi-hop routing protocol. The characteristics of the routing protocol together with lower-layer network mechanisms give rise to a delay process with high variance and stepwise changing mean. A new predictive control scheme with a delay estimator is proposed in the paper. The estimator is based on a Kalman filter with a change detection algorithm. It is able to track the delay mean changes while efficiently attenuating the high-frequency jitter. The control scheme is analyzed and its implementation detailed. Network data from an experimental setup are used to illustrate the efficiency of the approach.", "Describes a new framework for distributed control systems in which estimators are used at each node to estimate the values of the outputs at the other nodes. The estimated values are then used to compute the control algorithms at each node. When the estimated value deviates from the true value by more than a pre-specified tolerance, the actual value is broadcast to the rest of the system; all of the estimators are then updated to the current value. By using the estimated values instead of true value at every node, a significant saving in the required bandwidth is achieved, allowing large-scale distributed control systems to be implemented effectively.
The stability, performance, and expected communication frequency of the reduced communication system are analyzed in detail. Simulation and experimental results validating the effectiveness and communication savings of the framework are also presented.", "First, we review some previous work on networked control systems (NCSs) and offer some improvements. Then, we summarize the fundamental issues in NCSs and examine them with different underlying network-scheduling protocols. We present NCS models with network-induced delay and analyze their stability using stability regions and a hybrid systems technique. Following that, we discuss methods to compensate network-induced delay and present experimental results over a physical network. Then, we model NCSs with packet dropout and multiple-packet transmission as asynchronous dynamical systems and analyze their stability. Finally, we present our conclusions.", "", "", "", "In this paper, we consider a robust network control problem. We consider linear unstable and uncertain discrete time plants with a network between the sensor and controller and the controller and plant. We investigate the effect of data dropout in the form of packet losses. Four distinct control schemes are explored and sufficient conditions to ensure almost sure stability of the closed loop system are derived for each of them in terms of minimum packet arrival rate and the maximum uncertainty. In the past decade, networked control systems (NCS) have gained much attention from both the control community and the network and communication community. When compared with classical feedback control systems, networked control systems have several advantages. For example, they can reduce the system wiring, make the system easy to operate and maintain and later diagnose in case of malfunctioning, and increase system agility (20).
Although NCS have advantages, inserting a network in between the plant and the controller introduces many problems as well. For instance, zero-delayed sensing and actuation, perfect information and synchronization are no longer guaranteed in the new system architecture as only finite bandwidth is available and data packet drops and delays may occur due to network traffic conditions. These must be revisited and analyzed before networked control systems become prevalent. Recently, many researchers have spent effort on these issues and some significant results were obtained and many are in progress. Many of the aforementioned issues are studied separately. Tatikonda (19) and Sahai (13) have presented some interesting results in the area of control under communication constraints. Specifically, Tatikonda gave a necessary and sufficient condition on the channel data rate such that a noiseless LTI system in the closed loop is asymptotically stable. He also gave rate results for stabilizing a noisy LTI system over a digital channel. Sahai proposed the notion of anytime capacity to deal with real-time estimation and control for a networked control system. In our paper (17), the authors have considered various rate issues under finite bandwidth, packet drops and finite controls. An optimal bit allocation scheme is given in (16) under the networked setting. The effect of packet drops on state estimation was studied by Sinopoli et al. in (3). It has further been investigated by many researchers including the present authors in (15) and (6).", "We introduce a novel control network protocol, try-once-discard (TOD), for multiple-input-multiple-output (MIMO) networked control systems (NCSs), and provide an analytic proof of global exponential stability for both the new protocol and the more commonly used (statically scheduled) access methods.
Our approach is to first design the controller using established techniques and considering the network transparency, and then analyze the effect of the network on closed-loop system performance. When implemented, an NCS consists of multiple independent sensors and actuators competing for access to the network, with no universal clock available to synchronize their actions. Since the nodes act asynchronously, we allow access to the network at any time, but assume each access occurs before a prescribed deadline, known as the maximum allowable transfer interval. Only one node may access the network at a time. This communication constraint imposed by the network is the main focus of the paper. The performance of the new TOD protocol and the statically scheduled protocols are examined in simulations of an automotive gas turbine and an unstable batch reactor.", "", "In this paper, stability and disturbance attenuation issues for a class of networked control systems (NCSs) under uncertain access delay and packet dropout effects are considered. Our aim is to find conditions on the delay and packet dropout rate, under which the system stability and H∞ disturbance attenuation properties are preserved to a desired level. The basic idea in this paper is to formulate such a networked control system as a discrete-time switched system. Then the NCSs' stability and performance problems can be reduced to corresponding problems for the switched systems, which have been studied for decades and for which a number of results are available in the literature. The techniques in this paper are based on discrete-time switched systems and piecewise Lyapunov functions.", "The defining characteristic of a networked control system (NCS) is having one or more control loops closed via a serial communication channel.
Typically, when the words networking and control are used together, the focus is on the control of networks, but in this article our intent is nearly inverse, not control of networks but control through networks. NCS design objectives revolve around the performance and stability of a target physical device rather than of the network. The problem of stabilizing queue lengths, for example, is of secondary importance. Integrating computer networks into control systems to replace the traditional point-to-point wiring has enormous advantages, including lower cost, reduced weight and power, simpler installation and maintenance, and higher reliability. In this article, in addition to introducing networked control systems, we demonstrate how dispensing with queues and dynamically scheduling control traffic improves closed-loop performance." ] }
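The packet-loss stability question running through these abstracts can be illustrated on a scalar plant. Everything below — the plant numbers, the Bernoulli-drop model, and the function names — is our own toy sketch, not taken from any of the cited papers: the loop applies `x_{k+1} = a*x_k + b*u_k` with state feedback, and a lost control packet means the actuator applies zero input for that step.

```python
import random

def simulate_ncs(a, b, gain, drop_prob, steps=200, x0=1.0, seed=1):
    """Scalar plant x_{k+1} = a*x_k + b*u_k under feedback u_k = -gain*x_k,
    where each control packet is lost independently with probability
    drop_prob (a lost packet means u_k = 0, i.e. open-loop dynamics that step).
    Returns |x| after the given number of steps."""
    rng = random.Random(seed)
    x = x0
    for _ in range(steps):
        u = 0.0 if rng.random() < drop_prob else -gain * x
        x = a * x + b * u
    return abs(x)

def mean_square_stable(a, b, gain, p):
    """Classical sufficient condition for mean-square stability of this scalar
    loop under i.i.d. Bernoulli drops: the expected squared step factor,
    p*a^2 + (1-p)*(a - b*gain)^2, must be below 1."""
    return p * a ** 2 + (1.0 - p) * (a - b * gain) ** 2 < 1.0
```

For an unstable plant a = 1.2 stabilized to a closed-loop factor of 0.2, the condition holds at a 10% drop rate but fails at 90%, matching the minimum-packet-arrival-rate flavor of the conditions surveyed above.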
1103.4340
2950964453
A Multi-hop Control Network (MCN) consists of a plant where the communication between sensor, actuator and computational unit is supported by a wireless multi-hop communication network, and data flow is performed using scheduling and routing of sensing and actuation data. We address the problem of characterizing controllability and observability of a MCN, by means of necessary and sufficient conditions on the plant dynamics and on the communication scheduling and routing. We provide a methodology to design scheduling and routing, in order to satisfy controllability and observability of a MCN for any fault occurrence in a given set of configurations of failures.
However, what is needed for modeling and analyzing control protocols on multi-hop control networks is an integrated framework for analysing and co-designing network topology, scheduling, routing, transmission errors and control. To the best of our knowledge, the only formal model of multi-hop wireless sensor and actuator networks is reported in @cite_9 . That paper presents a simulation environment that facilitates simulation of computer nodes and communication networks interacting with the continuous-time dynamics of the real world. The main difference between the work presented in @cite_9 and this work is that here we provide results on a formal mathematical model that takes into account plant dynamics and scheduling-routing dynamics.
{ "cite_N": [ "@cite_9" ], "mid": [ "1794028435" ], "abstract": [ "Embedded systems are becoming increasingly networked and are deployed in application areas that require close interaction with their physical environment. Examples include distributed mobile agents and wireless sensor actuator networks. The complexity of these applications makes co-simulation a necessary tool during system development. This paper presents a simulation environment that facilitates simulation of computer nodes and communication networks interacting with the continuous-time dynamics of the real world. Features of the simulator include interrupt handling, task scheduling, wired and wireless communication, local clocks, dynamic voltage scaling, and battery-driven operation. Two simulation case studies are presented: a simple communication scenario and a mobile robot soccer game." ] }
1103.4168
2113780638
One utilisation of multidimensional databases is the field of On-line Analytical Processing (OLAP). The applications in this area are designed to make the analysis of shared multidimensional information fast [PendseB]. On one hand, speed can be achieved by specially devised data structures and algorithms. On the other hand, the analytical process is cyclic. In other words, the user of the OLAP application runs his or her queries one after the other. The output of the last query may be there (at least partly) in one of the previous results. Therefore caching also plays an important role in the operation of these systems. However, caching itself may not be enough to ensure acceptable performance. Size does matter: The more memory is available, the more we gain by loading and keeping information in there. Oftentimes, the cache size is fixed. This limits the performance of the multidimensional database, as well, unless we compress the data in order to move a greater proportion of them into the memory. Caching combined with proper compression methods promise further performance improvements. In this paper, we investigate how caching influences the speed of OLAP systems. Different physical representations (multidimensional and table) are evaluated. For the thorough comparison, models are proposed. We draw conclusions based on these models, and the conclusions are verified with empirical data.
The paper of Westmann et al. @cite_22 lists several related works in this field. It also discusses how compression can be integrated into a relational database system. It does not concern itself with the multidimensional physical representation, which is the main focus of our paper. They demonstrate that compression indeed offers high performance gains. It can, however, also increase the running time of certain update operations. In this paper we analyse the retrieval (or point query) operation only, as many On-line Analytical Processing (OLAP) applications handle the data in a read-mostly way. The database is updated outside working hours in batch mode. Despite this difference, we also encountered performance degradation due to compression when the entire physical representation was cached in memory. In this case, for one of the benchmark databases (TPC-D), the multidimensional representation became slower than the table representation because of the CPU-intensive Huffman decoding.
{ "cite_N": [ "@cite_22" ], "mid": [ "1993819379" ], "abstract": [ "In this paper, we show how compression can be integrated into a relational database system. Specifically, we describe how the storage manager, the query execution engine, and the query optimizer of a database system can be extended to deal with compressed data. Our main result is that compression can significantly improve the response time of queries if very light-weight compression techniques are used. We will present such light-weight compression techniques and give the results of running the TPC-D benchmark on a so compressed database and a non-compressed database using the AODB database system, an experimental database system that was developed at the Universities of Mannheim and Passau. Our benchmark results demonstrate that compression indeed offers high performance gains (up to 50 ) for IO-intensive queries and moderate gains for CPU-intensive queries. Compression can, however, also increase the running time of certain update operations. In all, we recommend to extend today's database systems with light-weight compression techniques and to make extensive use of this feature." ] }
In this paper, we use difference–Huffman coding to compress the multidimensional physical representation of the relations. This method is based on difference sequence compression, which was published in @cite_0 .
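As an illustration of the general idea (not the exact algorithm of @cite_0 ), a difference sequence replaces the sorted logical positions of the non-empty cells with the gaps between consecutive positions, and a Huffman code then exploits the skewed distribution of those gaps. A minimal Python sketch, with hypothetical cell positions:

```python
import heapq
from collections import Counter

def difference_sequence(positions):
    """Replace the sorted logical positions of non-empty cells with the
    gaps between consecutive positions (the first position is kept)."""
    return [positions[0]] + [b - a for a, b in zip(positions, positions[1:])]

def huffman_code(symbols):
    """Build a prefix code {symbol: bitstring} from symbol frequencies."""
    freq = Counter(symbols)
    if len(freq) == 1:                       # degenerate single-symbol case
        return {next(iter(freq)): "0"}
    heap = [(f, i, {s: ""}) for i, (s, f) in enumerate(sorted(freq.items()))]
    heapq.heapify(heap)
    tiebreak = len(heap)
    while len(heap) > 1:
        f1, _, c1 = heapq.heappop(heap)      # merge the two rarest subtrees
        f2, _, c2 = heapq.heappop(heap)
        merged = {s: "0" + b for s, b in c1.items()}
        merged.update({s: "1" + b for s, b in c2.items()})
        heapq.heappush(heap, (f1 + f2, tiebreak, merged))
        tiebreak += 1
    return heap[0][2]

def huffman_decode(bits, code):
    """Greedy prefix decode: valid because Huffman codes are prefix-free."""
    rev = {b: s for s, b in code.items()}
    out, buf = [], ""
    for bit in bits:
        buf += bit
        if buf in rev:
            out.append(rev[buf])
            buf = ""
    return out

positions = [0, 2, 3, 7, 8, 9, 15]           # non-empty cells, flattened order
diffs = difference_sequence(positions)       # [0, 2, 1, 4, 1, 1, 6]
code = huffman_code(diffs)
encoded = "".join(code[d] for d in diffs)
```

Because small gaps dominate in a dense region of the cube, the frequent gap values receive short codewords, which is where the compression gain over storing raw positions comes from.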
{ "cite_N": [ "@cite_0" ], "mid": [ "2963848360" ], "abstract": [ "The multidimensional databases often use compression techniques in order to decrease the size of the database. This paper introduces a new method called difference sequence compression. Under some conditions, this new technique is able to create a smaller size multidimensional database than others like single count header compression, logical position compression or base-offset compression." ] }
Chen et al. @cite_23 propose a Hierarchical Dictionary Encoding strategy and discuss query optimization issues. Both of these topics are beyond the scope of our paper.
{ "cite_N": [ "@cite_23" ], "mid": [ "2105326258" ], "abstract": [ "Over the last decades, improvements in CPU speed have outpaced improvements in main memory and disk access rates by orders of magnitude, enabling the use of data compression techniques to improve the performance of database systems. Previous work describes the benefits of compression for numerical attributes, where data is stored in compressed format on disk. Despite the abundance of string-valued attributes in relational schemas there is little work on compression for string attributes in a database context. Moreover, none of the previous work suitably addresses the role of the query optimizer: During query execution, data is either eagerly decompressed when it is read into main memory, or data lazily stays compressed in main memory and is decompressed on demand only In this paper, we present an effective approach for database compression based on lightweight, attribute-level compression techniques. We propose a IIierarchical Dictionary Encoding strategy that intelligently selects the most effective compression method for string-valued attributes. We show that eager and lazy decompression strategies produce sub-optimal plans for queries involving compressed string attributes. We then formalize the problem of compression-aware query optimization and propose one provably optimal and two fast heuristic algorithms for selecting a query plan for relational schemas with compressed attributes; our algorithms can easily be integrated into existing cost-based query optimizers. Experiments using TPC-H data demonstrate the impact of our string compression methods and show the importance of compression-aware query optimization. Our approach results in up to an order speed up over existing approaches." ] }
O'Connell @cite_11 analyses compression of the data itself in a database built on a triple store. We remove the empty cells from the multidimensional array, but do not compress the data themselves.
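A minimal sketch of the empty-cell removal just described (illustrative, not our actual implementation): only the non-empty cells are stored, together with a sorted list of their logical positions, and a point query maps a logical position to its physical slot by binary search.

```python
import bisect

class CompressedArray:
    """Store only the non-empty cells of a flattened multidimensional
    array. `cells` is a dict {logical_position: value}; empty cells are
    simply absent, so the physical storage is proportional to the number
    of non-empty cells rather than the full logical size."""

    def __init__(self, cells):
        self.positions = sorted(cells)                 # sorted logical positions
        self.values = [cells[p] for p in self.positions]

    def get(self, logical_position, default=None):
        # Binary search for the logical position among the stored cells.
        i = bisect.bisect_left(self.positions, logical_position)
        if i < len(self.positions) and self.positions[i] == logical_position:
            return self.values[i]
        return default                                 # empty cell
```

The values themselves are stored verbatim, which mirrors the distinction drawn above: the representation is compacted, but the data are not compressed.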
{ "cite_N": [ "@cite_11" ], "mid": [ "2098315581" ], "abstract": [ "There has been much work on compressing database indexes, but less on compressing the data itself. We examine the performance gains to be made by compression outside the index. A novel compression algorithm is reported, which enables the processing of queries without decompressing data needed to perform join operations in a database built on a triple store. The results of modelling the performance of the database with and without compression are given and compared with other recent work in this area. It is found that for some applications, gains in performance of over 50 are achievable, and in OLTP-like situations, there are also gains to be made." ] }
When we analyse algorithms that operate on data on secondary storage, we usually investigate how many disk input/output (I/O) operations are performed, because the cost of disk I/O dominates the total running time @cite_19 . We follow a similar approach in the section below.
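To make the counting rule concrete, here is a toy cost model that charges one unit per page read and ignores CPU work entirely; the page size, record size, and uniform-access assumption are all illustrative, not parameters from our system.

```python
import math

def pages_needed(num_records, record_bytes, page_bytes=8192):
    """Number of disk pages occupied by a table, assuming records do not
    span page boundaries (an assumed, simplified layout)."""
    records_per_page = page_bytes // record_bytes
    return math.ceil(num_records / records_per_page)

def scan_cost(num_records, record_bytes, page_bytes=8192):
    """I/O cost of a full sequential scan: one read per occupied page."""
    return pages_needed(num_records, record_bytes, page_bytes)

def point_query_cost(num_queries, num_pages, cached_pages=0):
    """Expected page reads for uniformly random point queries against a
    cache that already holds `cached_pages` of the `num_pages` pages."""
    miss_prob = max(num_pages - cached_pages, 0) / num_pages
    return num_queries * miss_prob
```

Under such a model, halving the number of pages via compression halves the scan cost, which is why compression and caching interact so directly with retrieval time.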
{ "cite_N": [ "@cite_19" ], "mid": [ "1565494300" ], "abstract": [ "From the Publisher: Three well-known computer scientists at Stanford University-Hector Garcia-Molina, Jeffrey D. Ullman, and Jennifer Widom-have written one of the most comprehensive books on database system implementation. Hector Garcia- Molina pioneered this book at Stanford as a second database systems course for computer science majors and industry-based professionals. It focuses on the implementation of database systems, including storage structures, query processing, and transaction management. Database System Implementation is valuable as an academic textbook or a professional reference. Noteworthy Features Provides extensive coverage of query processing, including major algorithms for execution of queries and techniques for optimizing queries Covers information integration, including warehousing and mediators, OLAP, and data-cube systems Explains error-correction in RAID disks and covers bitmap indexes, data mining, data statistics, and pointer swizzling Supports additional teaching materials found on the book's Web page at ..." ] }
The main focus of @cite_24 is the CPU cache. In our paper, we deal with the buffer cache as opposed to the CPU cache.
{ "cite_N": [ "@cite_24" ], "mid": [ "182451592" ], "abstract": [ "Computer systems have enjoyed an exponential growth in processor speed for the past 20 years, while main memory speed has improved only moderately. Today a cache miss to main memory takes hundreds of processor cycles. Recent studies have demonstrated that on commercial databases, about 50 or more of execution time in memory is often wasted due to cache misses. In light of this problem, a number of recent studies focused on reducing the number of cache misses of database algorithms. In this thesis, we investigate a different approach: reducing the impact of cache misses through a technique called cache prefetching. Since prefetching for sequential array accesses has been well studied, we are interested in studying non-contiguous access patterns found in two classes of database algorithms: the B+-Tree index algorithm and the hash join algorithm. We re-examine their designs with cache prefetching in mind, and combine prefetching and data locality optimizations to achieve good cache performance. For B+-Trees, we first propose and evaluate a novel main memory index structure, Prefetching B+Trees, which uses prefetching to accelerate two major access patterns of B+-Tree indices: searches and range scans. We then apply our findings in the development of a novel index structure, Fractal Prefetching B+-Trees, that optimizes index operations both for CPU cache performance and for disk performance in commercial database systems by intelligently embedding cache-optimized trees into disk pages. For hash joins, we first exploit cache prefetching separately for the I O partition phase and the join phase of the algorithm. We propose and evaluate two techniques, Group Prefetching and Software-Pipelined Prefetching, that exploit inter-tuple parallelism to overlap cache misses across the processing of multiple tuples. 
Then we present a novel algorithm, Inspector Joins, that exploits the free information obtained from one pass of the hash join algorithm to improve the performance of a later pass. This new algorithm addresses the memory bandwidth sharing problem in shared-bus multiprocessor systems. We compare our techniques against state-of-the-art cache-friendly algorithms for B+-Trees and hash joins through both simulation studies and real machine experiments. Our experimental results demonstrate dramatic performance benefits of our cache prefetching enabled techniques." ] }
Vitter @cite_5 describes an algorithm for prefetching based on compression techniques. Our paper assumes that the system does not read ahead.
{ "cite_N": [ "@cite_5" ], "mid": [ "2146104665" ], "abstract": [ "A form of the competitive philosophy is applied to the problem of prefetching to develop an optimal universal prefetcher in terms of fault ratio, with particular applications to large-scale databases and hypertext systems. The algorithms are novel in that they are based on data compression techniques that are both theoretically optimal and good in practice. Intuitively, in order to compress data effectively, one has to be able to predict feature data well, and thus good data compressors should be able to predict well for purposes of prefetching. It is shown for powerful models such as Markov sources and mth order Markov sources that the page fault rates incurred by the prefetching algorithms presented are optimal in the limit for almost all sequences of page accesses. >" ] }
Poess @cite_15 shows how compression works in Oracle, but does not test its performance for different buffer cache sizes, which is an important issue in this paper.
{ "cite_N": [ "@cite_15" ], "mid": [ "2153084230" ], "abstract": [ "The Oracle RDBMS recently introduced an innovative compression technique for reducing the size of relational tables. By using a compression algorithm specifically designed for relational data, Oracle is able to compress data much more effectively than standard compression techniques. More significantly, unlike other compression techniques, Oracle incurs virtually no performance penalty for SQL queries accessing compressed tables. In fact, Oracle's compression may provide performance gains for queries accessing large amounts of data, as well as for certain data management operations like backup and recovery. Oracle's compression algorithm is particularly well-suited for data warehouses: environments, which contains large volumes of historical data, with heavy query workloads. Compression can enable a data warehouse to store several times more raw data without increasing the total disk storage or impacting query performance." ] }
In @cite_17 , Xi et al. predict the buffer hit rate for a given buffer pool size using a Markov chain model. In our article, instead of the buffer hit rate, we estimate the expected number of pages brought into memory from the disk, because this is proportional to the retrieval time. Another difference is that we usually start with a cold (that is, empty) cache and investigate its warm-up together with the decrease in retrieval time, whereas in @cite_17 the size of the buffer pool is fixed before the hit rate is predicted.
{ "cite_N": [ "@cite_17" ], "mid": [ "2044240774" ], "abstract": [ "Computing multiple related group-bys and aggregates is one of the core operations of On-Line Analytical Processing (OLAP) applications. Recently, [GBLP95] proposed the “Cube” operator, which computes group-by aggregations over all possible subsets of the specified dimensions. The rapid acceptance of the importance of this operator has led to a variant of the Cube being proposed for the SQL standard. Several efficient algorithms for Relational OLAP (ROLAP) have been developed to compute the Cube. However, to our knowledge there is nothing in the literature on how to compute the Cube for Multidimensional OLAP (MOLAP) systems, which store their data in sparse arrays rather than in tables. In this paper, we present a MOLAP algorithm to compute the Cube, and compare it to a leading ROLAP algorithm. The comparison between the two is interesting, since although they are computing the same function, one is value-based (the ROLAP algorithm) whereas the other is position-based (the MOLAP algorithm). Our tests show that, given appropriate compression techniques, the MOLAP algorithm is significantly faster than the ROLAP algorithm. In fact, the difference is so pronounced that this MOLAP algorithm may be useful for ROLAP systems as well as MOLAP systems, since in many cases, instead of cubing a table directly, it is faster to first convert the table to an array, cube the array, then convert the result back to a table." ] }
1103.3240
2019452753
We show that several important resource allocation problems in wireless networks fit within the common framework of constraint satisfaction problems (CSPs). Inspired by the requirements of these applications, where variables are located at distinct network devices that may not be able to communicate but may interfere, we define natural criteria that a CSP solver must possess in order to be practical. We term these algorithms decentralized CSP solvers. The best known CSP solvers were designed for centralized problems and do not meet these criteria. We introduce a stochastic decentralized CSP solver, proving that it will find a solution in almost surely finite time, should one exist, and also showing it has many practically desirable properties. We benchmark the algorithm's performance on a well-studied class of CSPs, random k-SAT, illustrating that the time the algorithm takes to find a satisfying assignment is competitive with stochastic centralized solvers on problems with order a thousand variables despite its decentralized nature. We demonstrate the solver's practical utility for the problems that motivated its introduction by using it to find a noninterfering channel allocation for a network formed from data from downtown Manhattan.
Algorithms developed from the DPLL approach have proved to be the quickest at the SAT-Race and SAT Competition events in recent years, e.g. ManySAT @cite_14 . The DPLL approach ultimately guarantees a complete search of the solution space and so meets the and criteria. Such algorithms are, however, based on a branching-rule methodology, e.g. @cite_25 , that assumes the existence of a centralized intelligence employing a backtracking search. The implicit assumptions about the information available to this intelligence break the conditions, and so these are not decentralized CSP solvers.
{ "cite_N": [ "@cite_14", "@cite_25" ], "mid": [ "2103406309", "1973734335" ], "abstract": [ "In this paper, ManySAT a new portfolio-based parallel SAT solver is thoroughly described. The design of ManySAT benefits from the main weaknesses of modern SAT solvers: their sensitivity to parameter tuning and their lack of robustness. ManySAT uses a portfolio of complementary sequential algorithms obtained through careful variations of the standard DPLL algorithm. Additionally, each sequential algorithm shares clauses to improve the overall performance of the whole system. This contrasts with most of the parallel SAT solvers generally designed using the divide-and-conquer paradigm. Experiments on many industrial SAT instances, and the first rank obtained by ManySAT in the parallel track of the 2008 SAT-Race clearly show the potential of our design philosophy.", "The Davis-Putnam-Logemann-Loveland algorithm is one of the most popular algorithms for solving the satisfiability problem. Its efficiency depends on its choice of a branching rule. We construct a sequence of instances of the satisfiability problem that fools a variety of sensible'''' branching rules in the following sense: when the instance has n variables, each of the sensible'''' branching rules brings about Omega(2^(n 5)) recursive calls of the Davis-Putnam-Logemann-Loveland algorithm, even though only O(1) such calls are necessary." ] }
1103.3240
2019452753
We show that several important resource allocation problems in wireless networks fit within the common framework of constraint satisfaction problems (CSPs). Inspired by the requirements of these applications, where variables are located at distinct network devices that may not be able to communicate but may interfere, we define natural criteria that a CSP solver must possess in order to be practical. We term these algorithms decentralized CSP solvers. The best known CSP solvers were designed for centralized problems and do not meet these criteria. We introduce a stochastic decentralized CSP solver, proving that it will find a solution in almost surely finite time, should one exist, and also showing it has many practically desirable properties. We benchmark the algorithm's performance on a well-studied class of CSPs, random k-SAT, illustrating that the time the algorithm takes to find a satisfying assignment is competitive with stochastic centralized solvers on problems with order a thousand variables despite its decentralized nature. We demonstrate the solver's practical utility for the problems that motivated its introduction by using it to find a noninterfering channel allocation for a network formed from data from downtown Manhattan.
Survey propagation, a development of belief propagation @cite_2 from trees to general graphs, has proved effective on graphs that do not contain small loops @cite_29 . For a given CSP, the fundamental structure of study is called a factor graph. In order to generate it, one must know which clauses each variable participates in and the nature of each clause, breaking the and criteria, and so these are not decentralized CSP solvers.
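A sketch of the factor-graph construction (a standard textbook definition; the helper name is ours): one function node per clause, one variable node per variable, and an edge wherever a variable occurs in a clause. Building it requires knowing, for every variable, every clause it appears in, which is the global knowledge the criteria forbid.

```python
def factor_graph(clauses):
    """Build the bipartite factor graph of a CNF formula given as a list
    of lists of non-zero ints (-v means the negation of variable v).
    Returns (variable -> clause indices, clause index -> variables)."""
    var_to_clauses = {}
    clause_to_vars = {}
    for i, clause in enumerate(clauses):
        vars_in_clause = sorted({abs(lit) for lit in clause})
        clause_to_vars[i] = vars_in_clause
        for v in vars_in_clause:
            # This step is what centralizes the method: each variable's
            # adjacency list spans arbitrary clauses of the formula.
            var_to_clauses.setdefault(v, []).append(i)
    return var_to_clauses, clause_to_vars
```

Message-passing algorithms such as survey propagation then iterate over exactly these adjacency lists, so a device that only knows its own clauses cannot run them.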
{ "cite_N": [ "@cite_29", "@cite_2" ], "mid": [ "67367170", "2156094048" ], "abstract": [ "We study the problem of satisfiability of randomly chosen clauses, each with K Boolean variables. Using the cavity method at zero temperature, we find the phase diagram for the @math case. We show the existence of an intermediate phase in the satisfiable region, where the proliferation of metastable states is at the origin of the slowdown of search algorithms. The fundamental order parameter introduced in the cavity method, which consists of surveys of local magnetic fields in the various possible states of the system, can be computed for one given sample. These surveys can be used to invent new types of algorithms for solving hard combinatorial optimizations problems. One such algorithm is shown here for the @math satisfiability problem, with very good performances.", "This paper presents generalizations of Bayes likelihood-ratio updating rule which facilitate an asynchronous propagation of the impacts of new beliefs and or new evidence in hierarchically organized inference structures with multi-hypotheses variables. The computational scheme proposed specifies a set of belief parameters, communication messages and updating rules which guarantee that the diffusion of updated beliefs is accomplished in a single pass and complies with the tenets of Bayes calculus." ] }
1103.2246
1510675882
We present a formal proof of a time-triggered hardware interface. The design implements the bit-clock synchronization mechanism specified by the FlexRay standard for automotive embedded systems. The design is described at the gate-level. It can be translated to Verilog and synthesized on FPGA. The proof is based on a general model of asynchronous communications and combines interactive theorem proving in Isabelle HOL and automatic model-checking using NuSMV together with a model-reduction procedure, IHaVeIt. Our general model of asynchronous communications defines a clear separation between analog and digital concerns. This separation enables the combination of theorem proving and model-checking for an efficient methodology. The analog phenomena are formalized in the logic of Isabelle HOL. The gate-level hardware is automatically analyzed using IHaVeIt. Our proof reveals the correct values of a crucial parameter of the bit-clock synchronization mechanism. Our main theorem proves the functional correctness as well as the maximum number of cycles of the transmission.
The verification of analog and mixed signal (AMS) designs is a relatively young research field. A recent survey gave an overview of this emerging research area @cite_11 . The authors identify several successful applications of automatic techniques (equivalence checking, model checking, or run-time verification) in the context of AMS designs. Our work is more related to the last category identified in this survey, namely proof based methods. Hanna @cite_16 @cite_9 used predicates to approximate analog behaviors at the transistor level. The predicates can be embedded in digital proofs. His work is not specifically targeted to communication circuits and does not consider timing parameters, metastability or clock drift. We consider only gates and not their structure in terms of transistors. Recently, Al Sammane @cite_12 proposed a new symbolic verification methodology based on the computer algebra system Mathematica. This approach is based on a combination of induction and symbolic simulations. It is suitable to systems that can be described using discrete-time models. One contribution of our work is to combine discrete-time models with continuous time models.
{ "cite_N": [ "@cite_9", "@cite_16", "@cite_12", "@cite_11" ], "mid": [ "1522828591", "1568322431", "2144275718", "2116358561" ], "abstract": [ "An approach is described to the specification and verification of digital systems implemented wholly or partly at the analog level of abstraction. The approach relies upon specifying the behaviours of analog components (such as transistors) by piecewise-linear predicates on voltages and currents. A decision procedure is described that can, for a wide class of specifications, automatically establish the correctness of an implementation.", "This paper describes how to specify and reason about the properties of non-ideal logical circuitry in the analogue domain. Device behaviours are characterised by predicates over the voltage and current waveforms present at their ports. In many cases it suffices to formulate these predicates simply in terms of linear inequalities. The behavioural predicate for an overall circuit is obtained by taking the conjunction of the propositions satisfied by the individual components along with the constraints imposed by Kirchhoff's current law. As an illustration, the verification of a circuit for a Ttl not gate is outlined.", "The paper proposed a new symbolic verification methodology for proving the properties of analog and mixed signal (AMS) designs. Starting with an AMS description and a set of properties and using symbolic computation, a normal mathematical representation was extracted for the system in terms of recurrence equations. These normalized equations are used along with an induction verification strategy defined inside the computer algebra system Mathematica to prove the correctness of the properties. The methodology was applied on a third order DeltaSigma modulator", "Analog and mixed signal (AMS) designs are an important part of embedded systems that link digital designs to the analog world. 
Due to challenges associated with its verification process, AMS designs require a considerable portion of the total design cycle time. In contrast to digital designs, the verification of AMS systems is a challenging task that requires lots of expertise and deep understanding of their behavior. Researchers started lately studying the applicability of formal methods for the verification of AMS systems as a way to tackle the limitations of conventional verification methods like simulation. This paper surveys research activities in the formal verification of AMS designs as well as compares the different proposed approaches." ] }
1103.2520
2951927089
We consider a task of scheduling with a common deadline on a single machine. Every player reports to a scheduler the length of his job and the scheduler needs to finish as many jobs as possible by the deadline. For this simple problem, there is a truthful mechanism that achieves maximum welfare in dominant strategies. The new aspect of our work is that in our setting players are uncertain about their own job lengths, and hence are incapable of providing truthful reports (in the strict sense of the word). For a probabilistic model for uncertainty our main results are as follows. 1) Even with relatively little uncertainty, no mechanism can guarantee a constant fraction of the maximum welfare. 2) To remedy this situation, we introduce a new measure of economic efficiency, based on a notion of a fair share of a player, and design mechanisms that are @math -fair. In addition to its intrinsic appeal, our notion of fairness implies good approximation of maximum welfare in several cases of interest. 3) In our mechanisms the machine is sometimes left idle even though there are jobs that want to use it. We show that this unfavorable aspect is unavoidable, unless one gives up other favorable aspects (e.g., give up @math -fairness). We also consider a qualitative approach to uncertainty as an alternative to the probabilistic quantitative model. In the qualitative approach we break away from solution concepts such as dominant strategies (they are no longer well defined), and instead suggest an axiomatic approach, which amounts to listing desirable properties for mechanisms. We provide a mechanism that satisfies these properties.
A relatively small part of the literature on scheduling in game theoretic settings deals with deadlines. Porter @cite_21 studies a problem of online scheduling of jobs on a single processor. Each job is characterized by a release time, a deadline, a processing time, and a value for successful completion by its deadline. Monetary transfers are used in order to lead agents to a relatively efficient behavior. On the other hand the work by Cres and Moulin @cite_7 deals with a scheduling domain, where deadlines (rather than job lengths) are private information, and compares two scheduling policies that do not involve money. As in all other work on scheduling that we are aware of, a player does not have uncertainty about his private parameters.
{ "cite_N": [ "@cite_21", "@cite_7" ], "mid": [ "2153817930", "2115231948" ], "abstract": [ "For the problem of online real-time scheduling of jobs on a single processor, previous work presents matching upper and lower bounds on the competitive ratio that can be achieved by a deterministic algorithm. However, these results only apply to the non-strategic setting in which the jobs are released directly to the algorithm. Motivated by emerging areas such as grid computing, we instead consider this problem in an economic setting, in which each job is released to a separate, self-interested agent. The agent can then delay releasing the job to the algorithm, inflate its length, and declare an arbitrary value and deadline for the job, while the center determines not only the schedule, but the payment of each agent. For the resulting mechanism design problem (in which we also slightly strengthen an assumption from the non-strategic setting), we present a mechanism that addresses each incentive issue, while only increasing the competitive ratio by one. We then show a matching lower bound for deterministic mechanisms that never pay the agents.", "In a scheduling problem where agents can opt out, we show that the familiar random priority RP mechanism can be improved upon by another mechanism dubbed probabilistic serial PS. Both mechanisms are nonmanipulable in a strong sense, but the latter is Pareto superior to the former and serves a larger expected number of agents. The PS equilibrium outcome is easier to compute than the RP outcome; on the other hand, RP is easier to implement than PS. We show that the improvement of PS over RP is significant but small: at most a couple of percentage points in the relative welfare gain and the relative difference in quantity served. Both gains vanish when the number of agents is large; hence both mechanisms can be used as a proxy of each other." ] }
1103.2520
2951927089
We consider a task of scheduling with a common deadline on a single machine. Every player reports to a scheduler the length of his job and the scheduler needs to finish as many jobs as possible by the deadline. For this simple problem, there is a truthful mechanism that achieves maximum welfare in dominant strategies. The new aspect of our work is that in our setting players are uncertain about their own job lengths, and hence are incapable of providing truthful reports (in the strict sense of the word). For a probabilistic model for uncertainty our main results are as follows. 1) Even with relatively little uncertainty, no mechanism can guarantee a constant fraction of the maximum welfare. 2) To remedy this situation, we introduce a new measure of economic efficiency, based on a notion of a fair share of a player, and design mechanisms that are @math -fair. In addition to its intrinsic appeal, our notion of fairness implies good approximation of maximum welfare in several cases of interest. 3) In our mechanisms the machine is sometimes left idle even though there are jobs that want to use it. We show that this unfavorable aspect is unavoidable, unless one gives up other favorable aspects (e.g., give up @math -fairness). We also consider a qualitative approach to uncertainty as an alternative to the probabilistic quantitative model. In the qualitative approach we break away from solution concepts such as dominant strategies (they are no longer well defined), and instead suggest an axiomatic approach, which amounts to listing desirable properties for mechanisms. We provide a mechanism that satisfies these properties.
Fairness: As mentioned, our notion of fair-share is related to the literature on fair division @cite_15 . However, it is different from maxmin fairness typically discussed in the CS literature (e.g. @cite_11 ), and from the idea of proportional utility advocated in the literature (see @cite_15 for this and related concepts and discussions). In particular, our fair-share notion offers better guarantees to some players than to others. As noted earlier, our inspiration for this notion did not come from the fairness literature, but from literature on approximation algorithms @cite_17 .
{ "cite_N": [ "@cite_15", "@cite_17", "@cite_11" ], "mid": [ "", "2078040677", "1978593916" ], "abstract": [ "", "This paper analyzes the problem of inducing the members of an organization to behave as if they formed a team. Considered is a conglomerate-type organization consisting of a set of semi-autonomous subunits that are coordinated by the organization's head. The head's incentive problem is to choose a set of employee compensation rules that will induce his subunit managers to communicate accurate information and take optimal decisions. The main result exhibits a particular set of compensation rules, an optimal incentive structure, that leads to team behavior. Particular attention is directed to the informational aspects of the problem. An extended example of a resource allocation model is discussed and the optimal incentive structure is interpreted in terms of prices charged by the head for resources allocated to the subunits.", "We consider the following problem: The Santa Claus has n presents that he wants to distribute among m kids. Each kid has an arbitrary value for each present. Let pij be the value that kid i has for present j. The Santa's goal is to distribute presents in such a way that the least lucky kid is as happy as possible, i.e he tries to maximize mini=1,...,m sumj ∈ Si pij where Si is a set of presents received by the i-th kid.Our main result is an O(log log m log log log m) approximation algorithm for the restricted assignment case of the problem when pij ∈ pj,0 (i.e. when present j has either value pj or 0 for each kid). Our algorithm is based on rounding a certain natural exponentially large linear programming relaxation usually referred to as the configuration LP. We also show that the configuration LP has an integrality gap of Ω(m1 2) in the general case, when pij can be arbitrary." ] }
1103.2520
2951927089
We consider a task of scheduling with a common deadline on a single machine. Every player reports to a scheduler the length of his job and the scheduler needs to finish as many jobs as possible by the deadline. For this simple problem, there is a truthful mechanism that achieves maximum welfare in dominant strategies. The new aspect of our work is that in our setting players are uncertain about their own job lengths, and hence are incapable of providing truthful reports (in the strict sense of the word). For a probabilistic model for uncertainty our main results are as follows. 1) Even with relatively little uncertainty, no mechanism can guarantee a constant fraction of the maximum welfare. 2) To remedy this situation, we introduce a new measure of economic efficiency, based on a notion of a fair share of a player, and design mechanisms that are @math -fair. In addition to its intrinsic appeal, our notion of fairness implies good approximation of maximum welfare in several cases of interest. 3) In our mechanisms the machine is sometimes left idle even though there are jobs that want to use it. We show that this unfavorable aspect is unavoidable, unless one gives up other favorable aspects (e.g., give up @math -fairness). We also consider a qualitative approach to uncertainty as an alternative to the probabilistic quantitative model. In the qualitative approach we break away from solution concepts such as dominant strategies (they are no longer well defined), and instead suggest an axiomatic approach, which amounts to listing desirable properties for mechanisms. We provide a mechanism that satisfies these properties.
Forgiveness: Forgiveness is an important concept in social studies. It has also been recently adopted in designing computational systems, such as reputation systems @cite_20 . Another line of research that introduces a notion of forgiveness is the study of evolution of cooperation @cite_14 ; indeed, the famous Tit-for-Tat strategy can be viewed as employing proportional punishment and forgiveness. While game theory does not deal explicitly with forgiveness, some classical solution concepts do incorporate a possibility of error on behalf of players. A well-known example is that of trembling hand perfect equilibrium (see @cite_8 ). It defines rational behavior of a player as a limit to which best responses converge once the probability of "accidental" irrational behavior on behalf of other players tends to 0.
{ "cite_N": [ "@cite_14", "@cite_20", "@cite_8" ], "mid": [ "2062663664", "", "1998191601" ], "abstract": [ "Cooperation in organisms, whether bacteria or primates, has been a difficulty for evolutionary theory since Darwin. On the assumption that interactions between pairs of individuals occur on a probabilistic basis, a model is developed based on the concept of an evolutionarily stable strategy in the context of the Prisoner's Dilemma game. Deductions from the model, and the results of a computer tournament show how cooperation based on reciprocity can get started in an asocial world, can thrive while interacting with a wide range of other strategies, and can resist invasion once fully established. Potential applications include specific aspects of territoriality, mating, and disease.", "", "The concept of a perfect equilibrium point has been introduced in order to exclude the possibility that disequilibrium behavior is prescribed on unreached subgames. (Selten 1965 and 1973). Unfortunately this definition of perfectness does not remove all difficulties which may arise with respect to unreached parts of the game. It is necessary to reexamine the problem of defining a satisfactory non-cooperative equilibrium concept for games in extensive form. Therefore a new concept of a perfect equilibrium point will be introduced in this paper. In retrospect the earlier use of the word \"perfect\" was premature. Therefore a perfect equilibrium point in the old Sense will be called \"subgame perfect\". The new definition of perfectness has the property that a perfect equilibrium point is always subgame perfect but a subgame perfect equilibrium point may not be perfect. It will be shown that every finite extensive game with perfect recall has at least one perfect equilibrium point. 
Since subgame perfectness cannot be detected in the normal form, it is clear that for the purpose of the investigation of the problem of perfectness, the normal form is an inadequate representation of the extensive form. It will be convenient to introduce an \"agent normal form\" as a more adequate representation of games with perfect recall." ] }
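The Tit-for-Tat behavior described in the related-work paragraph above (proportional punishment followed by forgiveness) is easy to make concrete. A minimal iterated prisoner's dilemma sketch; the payoff values (T, R, P, S) = (5, 3, 1, 0) are the conventional ones, an assumption not taken from the record:

```python
# Tit-for-Tat "punishes" a defection exactly once, then "forgives" as soon as
# the opponent cooperates again: it simply mirrors the opponent's last move.
# Payoffs (T, R, P, S) = (5, 3, 1, 0) are the conventional values.

PAYOFF = {  # (my move, opponent move) -> my payoff; 'C'=cooperate, 'D'=defect
    ('C', 'C'): 3, ('C', 'D'): 0,
    ('D', 'C'): 5, ('D', 'D'): 1,
}

def tit_for_tat(history):
    """Cooperate first; afterwards copy the opponent's previous move."""
    return 'C' if not history else history[-1]

def always_defect(history):
    return 'D'

def play(strategy_a, strategy_b, rounds):
    hist_a, hist_b = [], []   # opponent moves observed by each player
    score_a = score_b = 0
    for _ in range(rounds):
        a, b = strategy_a(hist_a), strategy_b(hist_b)
        score_a += PAYOFF[(a, b)]
        score_b += PAYOFF[(b, a)]
        hist_a.append(b)      # A remembers B's move, and vice versa
        hist_b.append(a)
    return score_a, score_b

# Against a defector, Tit-for-Tat loses only the first round, then matches.
scores = play(tit_for_tat, always_defect, 10)
```

Against itself the strategy cooperates throughout, which is the reciprocity property the evolution-of-cooperation abstract above refers to.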
1103.2575
1978748826
For a polyhedron @math P let @math B(P) denote the polytopal complex that is formed by all bounded faces of @math P. If @math P is the intersection of @math n halfspaces in @math RD, but the maximum dimension @math d of any face in @math B(P) is much smaller, we show that the combinatorial complexity of @math P cannot be too high; in particular, that it is independent of @math D. We show that the number of vertices of @math P is @math O(nd) and the total number of bounded faces of the polyhedron is @math O(nd2). For inputs in general position the number of bounded faces is @math O(nd). We show that for certain specific values of @math d and @math D, our bounds are tight. For any fixed @math d, we show how to compute the set of all vertices, how to determine the maximum dimension of a bounded face of the polyhedron, and how to compute the set of bounded faces in polynomial time, by solving a number of linear programs that is polynomial in @math n.
One motivation for studying bounded subcomplexes comes from the tight span construction, a canonical method of embedding any metric space into a continuous space with properties similar to those of @math spaces @cite_23 @cite_11 @cite_9 . One way of defining the tight span, for a finite metric space with @math points @math , is to coordinatize @math -dimensional @math space by @math variables @math , and to define a polyhedral subset of the space by the @math linear inequalities [ x_i+x_j \ge dist (p_i,p_j) ] for each possible pair @math . Then, the tight span is the bounded subcomplex of this polyhedron. For metric spaces satisfying an appropriate general position assumption, the dimension of the bounded subcomplex is between @math and @math @cite_10 , but certain combinatorially defined metrics, such as the metrics of distances on certain classes of planar graphs, can have tight spans of much lower dimension @cite_20 . Our results bound the complexity of these low-dimensional tight spans and allow them to be constructed efficiently, generalizing our previous algorithms for constructing tight spans when they are homeomorphic to subsets of the plane @cite_16 .
{ "cite_N": [ "@cite_9", "@cite_23", "@cite_16", "@cite_10", "@cite_20", "@cite_11" ], "mid": [ "2061396714", "1966496924", "1673825429", "2020041297", "", "2004079181" ], "abstract": [ "Oxyalkylated condensation products prepared from aliphatic and aromatic amines, epihalohydrin and alkylene oxides are used in the preparation of polyurethane foams.", "We propose a new algorithm, called Equipoise, for the k-server problem, and we prove that it is two-competitive for two servers and 11-competitive for three servers. For k = 3, this is a substantial improvement over previously known constants. The algorithm uses several general techniques - convex hulls, work functions, and forgiveness.", "We describe a data structure, a rectangular complex, that can be used represent hyperconvex metric spaces that have the same topology (although not necessarily the same distance function) as subsets of the plane. We show how to use this data structure to construct the tight span of a metric space given as an n × n distance matrix, when the tight span is homeomorphic to a subset of the plane, in time O ( n 2 ), and to add a single point to a planar tight span in time O ( n ). As an application of this construction, we show how to test whether a given finite metric space embeds isometrically into the Manhattan plane in time O( n 2 ), and add a single point to the space and re-test whether it has such an embedding in time O ( n ).", "Given a finite metric, one can construct its tight span, a geometric object representing the metric. The dimension of a tight span encodes, among other things, the size of the space of explanatory trees for that metric; for instance, if the metric is a tree metric, the dimension of the tight span is one. 
We show that the dimension of the tight span of a generic metric is between @math and @math that both bounds are tight.", "", "Abstract The concept of tight extensions of a metric space is introduced, the existence of an essentially unique maximal tight extension T x —the “tight span,” being an abstract analogon of the convex hull—is established for any given metric space X and its properties are studied. Applications with respect to (1) the existence of embeddings of a metric space into trees, (2) optimal graphs realizing a metric space, and (3) the cohomological dimension of groups with specific length functions are discussed." ] }
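The polyhedron underlying the tight span, as described in the related-work paragraph above, is just the set of vectors satisfying x_i + x_j ≥ dist(p_i, p_j) for every pair of points. A small sketch that checks membership in this polyhedron and verifies that the Kuratowski image of each point, x = (dist(p_k, p_1), ..., dist(p_k, p_n)), lies inside it (which follows from the triangle inequality); the 4-point metric is made up for illustration:

```python
# The tight span of a finite metric space (p_1..p_n, dist) lives inside
#   P = { x in R^n : x_i + x_j >= dist(p_i, p_j) for all i, j }.
# For the Kuratowski image of p_k, x_i + x_j = dist(p_k,p_i) + dist(p_k,p_j),
# which dominates dist(p_i,p_j) by the triangle inequality, so it lies in P.
# The 4-point metric below is illustrative, not taken from the paper.

def in_polyhedron(x, dist):
    n = len(dist)
    return all(x[i] + x[j] >= dist[i][j] for i in range(n) for j in range(n))

# A small symmetric metric on 4 points (satisfies the triangle inequality).
dist = [
    [0, 2, 3, 4],
    [2, 0, 2, 3],
    [3, 2, 0, 2],
    [4, 3, 2, 0],
]

kuratowski_images = [row[:] for row in dist]  # x^(k)_i = dist(p_k, p_i)
all_inside = all(in_polyhedron(x, dist) for x in kuratowski_images)
```

The tight span itself is the bounded subcomplex of P, which is what the surrounding results bound and compute; this sketch only exercises the defining inequalities.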
1103.2575
1978748826
For a polyhedron @math P let @math B(P) denote the polytopal complex that is formed by all bounded faces of @math P. If @math P is the intersection of @math n halfspaces in @math RD, but the maximum dimension @math d of any face in @math B(P) is much smaller, we show that the combinatorial complexity of @math P cannot be too high; in particular, that it is independent of @math D. We show that the number of vertices of @math P is @math O(nd) and the total number of bounded faces of the polyhedron is @math O(nd2). For inputs in general position the number of bounded faces is @math O(nd). We show that for certain specific values of @math d and @math D, our bounds are tight. For any fixed @math d, we show how to compute the set of all vertices, how to determine the maximum dimension of a bounded face of the polyhedron, and how to compute the set of bounded faces in polynomial time, by solving a number of linear programs that is polynomial in @math n.
Our results can also be interpreted as statements about the complexity of Delaunay triangulations for inputs satisfying strong convex position assumptions. Delaunay triangulations are closely related to convex hulls: the Delaunay triangulation is combinatorially equivalent to the convex hull of a point set lifted to a sphere in one higher dimension, augmented by an extra point at the pole of the sphere, followed by the removal of all faces incident to that pole @cite_3 . If a @math -dimensional set of @math points has the property that every interior point of the convex hull of the set belongs to a Delaunay triangulation feature of dimension at least @math , then our results imply via this lifting relation that the Delaunay triangulation has @math @math -dimensional simplices. For instance, if the convex hull of a point set is a stacked polytope (a convex figure formed by gluing simplices facet-to-facet) and the Delaunay triangulation coincides with the gluing pattern of the simplices, then every interior point of the convex hull belongs either to one of the simplices or to one of the glued facets, so @math ; in this case, the number of simplices in the Delaunay triangulation is exactly @math .
{ "cite_N": [ "@cite_3" ], "mid": [ "1984406695" ], "abstract": [ "The problem of construction of planar Voronoi diagrams arises in many areas, one of the most important of which is in nearest neighbor problems. This includes clustering [ 141, contour maps [6] and (Euclidean) minimum spanning trees [23]. Shamos [22] gives several more applications. An JZ(N log N) time worst case lower bound can be shown for this problem by reducing it to sorting [2 11. The challenge is to construct an O(N log N) time algorithm. Shamos [213 and Shamos anti Hoey [23] describe an O(N log N) time divide-and-conquer algorithm for construction of the planar Euclidean Voronoi diagram. Lee and Wong [ 161 describe an O(N log N) time algorithm for the L1 and L, metrics in the plane, and Drysdale pnd Lee [8] present an O(N@g N)l *) t’ rme algorithm for the Voronoi diagram of N line segments (which they have since improved to O(N(log N)*) time). Shamos [2 11, Lee and Preparata [ 151, and Lipton and Tarjan [ 171 have produced fast algorithms for searching a Voronoi diagram (or any other straight-line planar graph). In this paper we describe an O(N log N) time algorithm for constructing a planar Euclidean Voronoi diagram which extends straightforwardly to higher dimensions. The fundamental result is that a K-dimensional Euclidean Voronoi diagram of N points can be constructed by transforming the points to K + I-space," ] }
1103.2575
1978748826
For a polyhedron @math P let @math B(P) denote the polytopal complex that is formed by all bounded faces of @math P. If @math P is the intersection of @math n halfspaces in @math RD, but the maximum dimension @math d of any face in @math B(P) is much smaller, we show that the combinatorial complexity of @math P cannot be too high; in particular, that it is independent of @math D. We show that the number of vertices of @math P is @math O(nd) and the total number of bounded faces of the polyhedron is @math O(nd2). For inputs in general position the number of bounded faces is @math O(nd). We show that for certain specific values of @math d and @math D, our bounds are tight. For any fixed @math d, we show how to compute the set of all vertices, how to determine the maximum dimension of a bounded face of the polyhedron, and how to compute the set of bounded faces in polynomial time, by solving a number of linear programs that is polynomial in @math n.
Bounded subcomplexes have also been investigated in other contexts. In tropical geometry, the tropical polytope defined by a set of points is also equivalent to the set of bounded faces of a polytope defined from the points @cite_8 . Develin @cite_15 studied the bounded faces of a certain polytope arising from a problem in algebra, and showed that they are all isomorphic to subpolytopes of permutohedra. Queyranne @cite_4 used polytopes to model scheduling problems; the polytope defined by Queyranne has a unique bounded facet. In connection with the tight span application, Hirai @cite_29 showed that the bounded faces of a halfspace intersection form a contractible complex. @cite_14 consider the problem of computing the bounded subcomplex of a polyhedron, but they do not bound the complexity of the complex. Their algorithms assume that the vertices and facets of the polyhedron are both already known, and are output-sensitive given this information.
{ "cite_N": [ "@cite_14", "@cite_4", "@cite_8", "@cite_29", "@cite_15" ], "mid": [ "2148675558", "2075265959", "", "2145159971", "1968602707" ], "abstract": [ "We study efficient combinatorial algorithms to produce the Hasse diagram of the poset of bounded faces of an unbounded polyhedron, given vertex-facet incidences. We also discuss the special case of simple polyhedra and present computational results.", "In a one-machine nonpreemptive scheduling problem, the feasible schedules may be defined by the vector of the corresponding job completion times. For given positive processing times, the associated simple scheduling polyhedronP is the convex hull of these feasible completion time vectors. The main result of this paper is a complete description of the minimal linear system definingP. We also give a complete, combinatorial description of the face lattice ofP, and a simple, O(n logn) separation algorithm. This algorithm has potential usefulness in cutting plane type algorithms for more difficult scheduling problems.", "", "A characterization is given to the distance between subtrees of a tree defined as the shortest path length between subtrees. This is a generalization of the four-point condition for tree metrics. For this, we use the theory of the tight span and obtain an extension of the famous result by Dress that a metric is a tree metric if and only if its tight span is a tree.", "Given a monomial k[x1,. . . ,xn]-module M in the Laurent polynomial ring k[x1±1, . . . , xn±1], the hull complex is defined to be the set of bounded faces of the convex hull of the points ta| xa ∈ M for sufficiently large t. Bayer and Sturmfels conjectured that the faces of this polyhedron are of bounded complexity in the sense that every such face is affinely isomorphic to a subpolytope of the (n – 1)-dimensional permutohedron, which in particular would imply that these faces have at most n! vertices. 
In this paper we prove that the latter statement is true, and give a counterexample to the stronger conjecture." ] }
1103.2404
2950292824
Detecting misbehavior (such as transmissions of false information) in vehicular ad hoc networks (VANETs) is a very important problem with a wide range of implications, including safety related and congestion avoidance applications. We discuss several limitations of existing misbehavior detection schemes (MDS) designed for VANETs. Most MDS are concerned with detection of malicious nodes. In most situations, vehicles would send wrong information because of selfish reasons of their owners, e.g. for gaining access to a particular lane. Because of this, it is more important to detect false information than to identify misbehaving nodes. We introduce the concept of data-centric misbehavior detection and propose algorithms which detect false alert messages and misbehaving nodes by observing their actions after sending out the alert messages. With the data-centric MDS, each node can independently decide whether information received is correct or false. The decision is based on the consistency of recent messages and the new alert with reported and estimated vehicle positions. No voting or majority decision is needed, making our MDS resilient to Sybil attacks. Instead of revoking all the secret credentials of misbehaving nodes, as done in most schemes, we impose fines on misbehaving nodes (administered by the certification authority), discouraging them from acting selfishly. This reduces the computation and communication costs involved in revoking all the secret credentials of misbehaving nodes.
The pseudonyms are generated in a way that the identity of the node cannot be obtained from the pseudonyms. A vehicle can also have multiple public/private key pairs, corresponding to each pseudonym. This concept of pseudonymity was introduced by @cite_23 and has gained a lot of attention. Pseudonyms were used in authentication in @cite_24 . @cite_14 used a hybrid scheme using pseudonyms and group signatures for authentication.
{ "cite_N": [ "@cite_24", "@cite_14", "@cite_23" ], "mid": [ "2121247918", "", "2137469879" ], "abstract": [ "Vehicular networks are very likely to be deployed in the coming years and thus become the most relevant form of mobile ad hoc networks. In this paper, we address the security of these networks. We provide a detailed threat analysis and devise an appropriate security architecture. We also describe some major design decisions still to be made, which in some cases have more than mere technical implications. We provide a set of security protocols, we show that they protect privacy and we analyze their robustness and efficiency.", "", "Road safety, traffic management, and driver convenience continue to improve, in large part thanks to appropriate usage of information technology. But this evolution has deep implications for security and privacy, which the research community has overlooked so far." ] }
1103.2635
2225787779
We develop methods for accelerating metric similarity search that are effective on modern hardware. Our algorithms factor into easily parallelizable components, making them simple to deploy and efficient on multicore CPUs and GPUs. Despite the simple structure of our algorithms, their search performance is provably sub linear in the size of the database, with a factor dependent only on its intrinsic dimensionality. We demonstrate that our methods provide substantial speedups on a range of datasets and hardware platforms. In particular, we present results on a 48-core server machine, on graphics hardware, and on a multicore desktop.
The data structure and algorithm in this work are based on two fundamentals of algorithms for metric data: space decomposition and the triangle inequality. These pillars are used in virtually all work on metric NN search; see the surveys of Chávez et al. and Clarkson for detailed overviews @cite_17 @cite_7 . Two of the most empirically effective structures are AESA @cite_24 and metric ball trees @cite_27 @cite_12 , both of which have spawned many relatives.
{ "cite_N": [ "@cite_7", "@cite_24", "@cite_27", "@cite_12", "@cite_17" ], "mid": [ "139098497", "2096635897", "1496508106", "", "2038044292" ], "abstract": [ "Given a set S of points in a metric space with distance function D, the nearest-neighbor searching problem is to build a data structure for S so that for an input query point q, the point s ∈ S that minimizes D(s,q) can be found quickly. We survey approaches to this problem, and its relation to concepts of metric space dimension. Several measures of dimension can be estimated using nearest-neighbor searching, while others can be used to estimate the cost of that searching. In recent years, several data structures have been proposed that are provably good for low-dimensional spaces, for some particular measures of dimension. These and other data structures for nearest-neighbor searching are surveyed.", "Abstract A new algorithm is proposed which finds the Nearest Neighbour of a given sample in approximately constant average time complexity (i.e. independent of the data set size). The algorithm does not assume the data to be structured into any vector space, and only makes use of the metric properties of the given distance, thus being of general use in many present applications of Pattern Recognition. Simulation results for different sizes, metrics, and dimensions, show that the average number of distance computations is less than 4 in a 2-dimensional space, and less than 60 in 10 dimensions. These results are obtained at the expense of a quadratic space complexity and, for data-set sizes over 1000 samples, represents a time complexity improvement of at least one order of magnitude over the best results reported until now for the same task.", "Balltrees are simple geometric data structures with a wide range of practical applications to geometric learning tasks. In this report we compare 5 different algorithms for constructing ball trees from data.
We study the trade-off between construction time and the quality of the constructed tree. Two of the algorithms are on-line, two construct the structures from the data set in a top-down fashion, and one uses a bottom-up approach. We empirically study the algorithms on random data drawn from eight different probability distributions representing smooth, clustered, and curve distributed data in different ambient space dimensions. We find that the bottom-up approach usually produces the best trees but has the longest construction time. The other approaches have uses in specific circumstances. 1. International Computer Science Institute, Berkeley, CA.", "", "The problem of searching the elements of a set that are close to a given query element under some similarity criterion has a vast number of applications in many branches of computer science, from pattern recognition to textual and multimedia information retrieval. We are interested in the rather general case where the similarity criterion defines a metric space, instead of the more restricted case of a vector space. Many solutions have been proposed in different areas, in many cases without cross-knowledge. Because of this, the same ideas have been reconceived several times, and very different presentations have been given for the same approaches. We present some basic results that explain the intrinsic difficulty of the search problem. This includes a quantitative definition of the elusive concept of \"intrinsic dimensionality.\" We also present a unified view of all the known proposals to organize metric spaces, so as to be able to understand them under a common framework. Most approaches turn out to be variations on a few different concepts. We organize those works in a taxonomy that allows us to devise new algorithms from combinations of concepts not noticed before because of the lack of communication between different communities.
We present experiments validating our results and comparing the existing approaches. We finish with recommendations for practitioners and open questions for future development." ] }
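The two pillars named above can be made concrete in a few lines. The sketch below (illustrative names and data, not any of the cited structures) uses a single pivot with precomputed distances: the triangle inequality gives the lower bound |d(q,p) - d(p,x)| <= d(q,x), which lets an exact search discard candidates without computing their distances:

```python
import math

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def pivot_nn(query, points, pivot_dists, pivot):
    """Exact nearest neighbour using one pivot for triangle-inequality pruning.

    pivot_dists[i] holds d(pivot, points[i]), precomputed at build time.
    For any x: |d(q, pivot) - d(pivot, x)| <= d(q, x), so a candidate whose
    lower bound already exceeds the best distance found so far can be
    discarded without an exact distance computation."""
    dq = dist(query, pivot)
    best, best_d = None, float("inf")
    # Visit candidates in order of increasing lower bound: once one lower
    # bound reaches best_d, all remaining candidates can be skipped too.
    order = sorted(range(len(points)), key=lambda i: abs(dq - pivot_dists[i]))
    for i in order:
        if abs(dq - pivot_dists[i]) >= best_d:
            break
        d = dist(query, points[i])
        if d < best_d:
            best, best_d = points[i], d
    return best, best_d

pts = [(0.0, 0.0), (5.0, 5.0), (9.0, 1.0)]
pivot = (0.0, 0.0)
pre = [dist(pivot, p) for p in pts]  # distances stored when the index is built
```

AESA generalizes this idea to use every database point as a pivot; ball trees organize the pivots hierarchically.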
1103.2635
2225787779
We develop methods for accelerating metric similarity search that are effective on modern hardware. Our algorithms factor into easily parallelizable components, making them simple to deploy and efficient on multicore CPUs and GPUs. Despite the simple structure of our algorithms, their search performance is provably sublinear in the size of the database, with a factor dependent only on its intrinsic dimensionality. We demonstrate that our methods provide substantial speedups on a range of datasets and hardware platforms. In particular, we present results on a 48-core server machine, on graphics hardware, and on a multicore desktop.
A long-standing problem in similarity search is the difficulty of dealing with high-dimensional data; see @cite_4 @cite_0 @cite_23 and the above surveys. The basic challenge is that space-decomposition structures that reduce the work for NN retrieval seem to have performance that scales exponentially with the dimensionality of the data, rendering them useless to all but the smallest problems. Within the last two decades there have been two very promising directions of work that attempt to deal with the problem of high-dimensional data.
{ "cite_N": [ "@cite_0", "@cite_4", "@cite_23" ], "mid": [ "1554174647", "2169351022", "1541459201" ], "abstract": [ "Given user data, one often wants to find approximate matches in a large database. A good example of such a task is finding images similar to a given image in a large collection of images. We focus on the important and technically difficult case where each data element is high dimensional, or more generally, is represented by a point in a large metric space, and distance calculations are computationally expensive. In this paper we introduce a data structure to solve this problem called a GNAT (Geometric Near-neighbor Access Tree). It is based on the philosophy that the data structure should act as a hierarchical geometrical model of the data as opposed to a simple decomposition of the data that does not use its intrinsic geometry. In experiments, we find that GNATs outperform previous data structures in a number of applications. Keywords: near neighbor, metric space, approximate queries, data mining, Dirichlet domains, Voronoi regions", "", "For similarity search in high-dimensional vector spaces (or ‘HDVSs’), researchers have proposed a number of new methods (or adaptations of existing methods) based, in the main, on data-space partitioning. However, the performance of these methods generally degrades as dimensionality increases. Although this phenomenon, known as the ‘dimensional curse’, is well known, little or no quantitative analysis of the phenomenon is available. In this paper, we provide a detailed analysis of partitioning and clustering techniques for similarity search in HDVSs. We show formally that these methods exhibit linear complexity at high dimensionality, and that existing methods are outperformed on average by a simple sequential scan if the number of dimensions exceeds around 10. Consequently, we come up with an alternative organization based on approximations to make the unavoidable sequential scan as fast as possible.
We describe a simple vector approximation scheme, called VA-file, and report on an experimental evaluation of this and of two tree-based index methods (an R*-tree and an X-tree)." ] }
1103.2635
2225787779
We develop methods for accelerating metric similarity search that are effective on modern hardware. Our algorithms factor into easily parallelizable components, making them simple to deploy and efficient on multicore CPUs and GPUs. Despite the simple structure of our algorithms, their search performance is provably sublinear in the size of the database, with a factor dependent only on its intrinsic dimensionality. We demonstrate that our methods provide substantial speedups on a range of datasets and hardware platforms. In particular, we present results on a 48-core server machine, on graphics hardware, and on a multicore desktop.
The first is called Locality Sensitive Hashing (LSH) @cite_19 . LSH has retrieval performance that is provably sublinear, independent of the underlying dimensionality. This was a major theoretical breakthrough, and the data structure has been successfully deployed on some tasks (e.g. @cite_10 ). However, LSH has some limitations: it can only provide approximate answers, it is defined only for particular distance measures (not for arbitrary metrics), and setting the parameters correctly can be complex @cite_22 .
{ "cite_N": [ "@cite_19", "@cite_10", "@cite_22" ], "mid": [ "2147717514", "2109034006", "2055839530" ], "abstract": [ "We present two algorithms for the approximate nearest neighbor problem in high-dimensional spaces. For data sets of size n living in R d , the algorithms require space that is only polynomial in n and d, while achieving query times that are sub-linear in n and polynomial in d. We also show applications to other high-dimensional geometric problems, such as the approximate minimum spanning tree. The article is based on the material from the authors' STOC'98 and FOCS'01 papers. It unifies, generalizes and simplifies the results from those papers.", "Motivation: Comparison of multimegabase genomic DNA sequences is a popular technique for finding and annotating conserved genome features. Performing such comparisons entails finding many short local alignments between sequences up to tens of megabases in length. To process such long sequences efficiently, existing algorithms find alignments by expanding around short runs of matching bases with no substitutions or other differences. Unfortunately, exact matches that are short enough to occur often in significant alignments also occur frequently by chance in the background sequence. Thus, these algorithms must trade off between efficiency and sensitivity to features without long exact matches. Results: We introduce a new algorithm, LSH-ALL-PAIRS, to find ungapped local alignments in genomic sequence with up to a specified fraction of substitutions. The length and substitution rate of these alignments can be chosen so that they appear frequently in significant similarities yet still remain rare in the background sequence. The algorithm finds ungapped alignments efficiently using a randomized search technique, locality-sensitive hashing. 
We have found LSH-ALL-PAIRS to be both efficient and sensitive for finding local similarities with as little as 63% identity in mammalian genomic sequences up to tens of megabases in length. Availability: Contact the author at the address below.", "Although Locality-Sensitive Hashing (LSH) is a promising approach to similarity search in high-dimensional spaces, it has not been considered practical partly because its search quality is sensitive to several parameters that are quite data dependent. Previous research on LSH, though obtained interesting asymptotic results, provides little guidance on how these parameters should be chosen, and tuning parameters for a given dataset remains a tedious process. To address this problem, we present a statistical performance model of Multi-probe LSH, a state-of-the-art variance of LSH. Our model can accurately predict the average search quality and latency given a small sample dataset. Apart from automatic parameter tuning with the performance model, we also use the model to devise an adaptive LSH search algorithm to determine the probing parameter dynamically for each query. The adaptive probing method addresses the problem that even though the average performance is tuned for optimal, the variance of the performance is extremely high. We experimented with three different datasets including audio, images and 3D shapes to evaluate our methods. The results show the accuracy of the proposed model: the recall errors predicted are within 5% of the real values for most cases; the adaptive search method reduces the standard deviation of recall by about 50% over the existing method." ] }
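The LSH idea, hash functions under which nearby items collide with higher probability than distant ones, can be sketched for cosine similarity with random hyperplanes. This is a textbook construction, not the scheme of any paper cited above; the names and parameters are illustrative:

```python
import random

def make_hash(dim, n_bits, seed=0):
    """A locality-sensitive hash for cosine similarity: n_bits random
    hyperplanes; each output bit records which side of one hyperplane the
    vector falls on. Vectors pointing in similar directions agree on most
    bits with high probability."""
    rng = random.Random(seed)
    planes = [[rng.gauss(0.0, 1.0) for _ in range(dim)] for _ in range(n_bits)]
    def h(v):
        return tuple(int(sum(pi * vi for pi, vi in zip(plane, v)) >= 0.0)
                     for plane in planes)
    return h

h = make_hash(dim=3, n_bits=16, seed=42)
a = (1.0, 0.9, 1.1)     # nearly the same direction as b
b = (1.0, 1.0, 1.0)
c = (-1.0, -1.0, -1.0)  # opposite direction

def agree(u, v):
    """Number of hash bits two vectors share (out of n_bits)."""
    return sum(x == y for x, y in zip(h(u), h(v)))
```

In a real deployment the bit signature indexes hash tables so that only colliding items are compared, which is where the sublinear query time comes from; the parameter-tuning difficulty mentioned above concerns choosing the number of bits and tables.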
1103.2635
2225787779
We develop methods for accelerating metric similarity search that are effective on modern hardware. Our algorithms factor into easily parallelizable components, making them simple to deploy and efficient on multicore CPUs and GPUs. Despite the simple structure of our algorithms, their search performance is provably sublinear in the size of the database, with a factor dependent only on its intrinsic dimensionality. We demonstrate that our methods provide substantial speedups on a range of datasets and hardware platforms. In particular, we present results on a 48-core server machine, on graphics hardware, and on a multicore desktop.
The second line of work, upon which we build, is based on the notion of intrinsic dimensionality. The basic idea here is that many data sets only appear high-dimensional, but are actually governed by a small number of parameters. Within data analysis and machine learning, the idea of low-dimensional intrinsic structure has become extremely popular, and such structure is believed to be common in many data sets of interest @cite_6 @cite_3 .
{ "cite_N": [ "@cite_3", "@cite_6" ], "mid": [ "2001141328", "2053186076" ], "abstract": [ "Scientists working with large volumes of high-dimensional data, such as global climate patterns, stellar spectra, or human gene distributions, regularly confront the problem of dimensionality reduction: finding meaningful low-dimensional structures hidden in their high-dimensional observations. The human brain confronts the same problem in everyday perception, extracting from its high-dimensional sensory inputs—30,000 auditory nerve fibers or 106 optic nerve fibers—a manageably small number of perceptually relevant features. Here we describe an approach to solving dimensionality reduction problems that uses easily measured local metric information to learn the underlying global geometry of a data set. Unlike classical techniques such as principal component analysis (PCA) and multidimensional scaling (MDS), our approach is capable of discovering the nonlinear degrees of freedom that underlie complex natural observations, such as human handwriting or images of a face under different viewing conditions. In contrast to previous algorithms for nonlinear dimensionality reduction, ours efficiently computes a globally optimal solution, and, for an important class of data manifolds, is guaranteed to converge asymptotically to the true structure.", "Many areas of science depend on exploratory data analysis and visualization. The need to analyze large amounts of multivariate data raises the fundamental problem of dimensionality reduction: how to discover compact representations of high-dimensional data. Here, we introduce locally linear embedding (LLE), an unsupervised learning algorithm that computes low-dimensional, neighborhood-preserving embeddings of high-dimensional inputs. Unlike clustering methods for local dimensionality reduction, LLE maps its inputs into a single global coordinate system of lower dimensionality, and its optimizations do not involve local minima. 
By exploiting the local symmetries of linear reconstructions, LLE is able to learn the global structure of nonlinear manifolds, such as those generated by images of faces or documents of text. How do we judge similarity? Our mental representations of the world are formed by processing large numbers of sensory in" ] }
1103.2635
2225787779
We develop methods for accelerating metric similarity search that are effective on modern hardware. Our algorithms factor into easily parallelizable components, making them simple to deploy and efficient on multicore CPUs and GPUs. Despite the simple structure of our algorithms, their search performance is provably sublinear in the size of the database, with a factor dependent only on its intrinsic dimensionality. We demonstrate that our methods provide substantial speedups on a range of datasets and hardware platforms. In particular, we present results on a 48-core server machine, on graphics hardware, and on a multicore desktop.
This idea has also been explored in the context of NN search. A variety of slightly different notions of metric space dimensionality capture this intuition. One which has recently resulted in strong theoretical and empirical results is the expansion constant @cite_15 @cite_28 , which we define formally later. This notion of dimension led to the development of the Cover Tree @cite_16 @cite_26 , which we return to momentarily. Though the notion has some idiosyncrasies @cite_26 , the impressive empirical performance of the Cover Tree suggests that it is a useful notion.
{ "cite_N": [ "@cite_28", "@cite_15", "@cite_16", "@cite_26" ], "mid": [ "2169036209", "1983067644", "2133296809", "2034188144" ], "abstract": [ "Most research on nearest neighbor algorithms in the literature has been focused on the Euclidean case. In many practical search problems however, the underlying metric is non-Euclidean. Nearest neighbor algorithms for general metric spaces are quite weak, which motivates a search for other classes of metric spaces that can be tractably searched. In this paper, we develop an efficient dynamic data structure for nearest neighbor queries in growth-constrained metrics. These metrics satisfy the property that for any point q and number r the ratio between numbers of points in balls of radius 2r and r is bounded by a constant. Spaces of this kind may occur in networking applications, such as the Internet or Peer-to-peer networks, and vector quantization applications, where feature vectors fall into low-dimensional manifolds within high-dimensional vector spaces.", "", "We present a tree data structure for fast nearest neighbor operations in general n-point metric spaces (where the data set consists of n points). The data structure requires O(n) space regardless of the metric's structure yet maintains all performance properties of a navigating net (Krauthgamer & Lee, 2004b). If the point set has a bounded expansion constant c, which is a measure of the intrinsic dimensionality, as defined in (Karger & Ruhl, 2002), the cover tree data structure can be constructed in O(c^6 n log n) time. Furthermore, nearest neighbor queries require time only logarithmic in n, in particular O(c^12 log n) time.
Our experimental results show speedups over the brute force search varying between one and several orders of magnitude on natural machine learning datasets.", "We present a simple deterministic data structure for maintaining a set S of points in a general metric space, while supporting proximity search (nearest neighbor and range queries) and updates to S (insertions and deletions). Our data structure consists of a sequence of progressively finer ε-nets of S, with pointers that allow us to navigate easily from one scale to the next. We analyze the worst-case complexity of this data structure in terms of the \"abstract dimensionality\" of the metric S. Our data structure is extremely efficient for metrics of bounded dimension and is essentially optimal in a certain model of distance computation. Finally, as a special case, our approach improves over one recently devised by Karger and Ruhl [KR02]." ] }
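The expansion constant referred to above bounds the ratio |B(p, 2r)| / |B(p, r)| over points p and radii r, where B(p, r) is the ball of radius r around p. A brute-force sketch (illustrative only, with an assumed point set) measures it empirically:

```python
import math

def ball_count(points, center, r):
    """Number of points within distance r of center (closed metric ball)."""
    return sum(math.dist(center, x) <= r for x in points)

def expansion_constant(points, radii):
    """Empirical expansion constant: the largest ratio |B(p, 2r)| / |B(p, r)|
    over all points p and the given radii. The inner ball is never empty,
    since it always contains p itself."""
    worst = 1.0
    for p in points:
        for r in radii:
            inner = ball_count(points, p, r)
            outer = ball_count(points, p, 2 * r)
            worst = max(worst, outer / inner)
    return worst

# A 1-D grid has constant expansion: doubling a radius roughly doubles the
# number of grid points covered, regardless of how many points there are.
grid = [(float(i), 0.0) for i in range(64)]
```

Low values over all scales indicate the kind of growth-constrained metric for which the Cover Tree's guarantees apply; a clustered or adversarial point set can drive the ratio much higher.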
1103.2635
2225787779
We develop methods for accelerating metric similarity search that are effective on modern hardware. Our algorithms factor into easily parallelizable components, making them simple to deploy and efficient on multicore CPUs and GPUs. Despite the simple structure of our algorithms, their search performance is provably sublinear in the size of the database, with a factor dependent only on its intrinsic dimensionality. We demonstrate that our methods provide substantial speedups on a range of datasets and hardware platforms. In particular, we present results on a 48-core server machine, on graphics hardware, and on a multicore desktop.
Perhaps the two most relevant methods for NN search are the Cover Tree and the GNAT of Brin @cite_0 ; let us distinguish this research from the present work. The GNAT uses a simple space decomposition based on representatives from the database, much as we do, and also discusses the idea of intrinsic dimensionality. However, the relationship between the GNAT's search performance and the intrinsic dimensionality is only discussed in an informal, heuristic way, whereas we give rigorous runtime guarantees. These rigorous bounds require a search algorithm that is different from that of the GNAT. Additionally, parallelization is not discussed in @cite_0 .
{ "cite_N": [ "@cite_0" ], "mid": [ "1554174647" ], "abstract": [ "Given user data, one often wants to find approximate matches in a large database. A good example of such a task is finding images similar to a given image in a large collection of images. We focus on the important and technically difficult case where each data element is high dimensional, or more generally, is represented by a point in a large metric space, and distance calculations are computationally expensive. In this paper we introduce a data structure to solve this problem called a GNAT (Geometric Near-neighbor Access Tree). It is based on the philosophy that the data structure should act as a hierarchical geometrical model of the data as opposed to a simple decomposition of the data that does not use its intrinsic geometry. In experiments, we find that GNATs outperform previous data structures in a number of applications. Keywords: near neighbor, metric space, approximate queries, data mining, Dirichlet domains, Voronoi regions" ] }
1103.2635
2225787779
We develop methods for accelerating metric similarity search that are effective on modern hardware. Our algorithms factor into easily parallelizable components, making them simple to deploy and efficient on multicore CPUs and GPUs. Despite the simple structure of our algorithms, their search performance is provably sublinear in the size of the database, with a factor dependent only on its intrinsic dimensionality. We demonstrate that our methods provide substantial speedups on a range of datasets and hardware platforms. In particular, we present results on a 48-core server machine, on graphics hardware, and on a multicore desktop.
Lastly, we touch on the major inspiration for this paper: the use of hardware to accelerate data-intensive processes. Impelled by the sudden ubiquity of multicore CPUs and the development of GPUs for general-purpose computation, this area of research has exploded in the last decade; let us provide a few inspiring examples. A relatively early work develops methods to off-load expensive database operations onto the GPU @cite_2 . A very recent piece of work tunes basic tree search algorithms (such as for index lookup) to be effective on modern multicore CPUs and GPUs @cite_21 . Finally, another paper suggests simply running brute force search on a GPU to accelerate NN search @cite_14 ; this simple approach provides a surprising amount of acceleration over computation on sequential CPUs @cite_9 .
{ "cite_N": [ "@cite_14", "@cite_9", "@cite_21", "@cite_2" ], "mid": [ "2110764701", "2124592110", "2151224499", "2054468361" ], "abstract": [ "We present a GPU algorithm for the nearest neighbor search, an important database problem. The search is completely performed using the GPU: No further post-processing using the CPU is needed. Our experimental results, using large synthetic and real-world data sets, showed that our GPU algorithm is several times faster than its CPU version.", "Statistical measures coming from information theory represent interesting bases for image and video processing tasks such as image retrieval and video object tracking. For example, let us mention the entropy and the Kullback-Leibler divergence. Accurate estimation of these measures requires to adapt to the local sample density, especially if the data are high-dimensional. The k nearest neighbor (kNN) framework has been used to define efficient variable-bandwidth kernel-based estimators with such a locally adaptive property. Unfortunately, these estimators are computationally intensive since they rely on searching neighbors among large sets of d-dimensional vectors. This computational burden can be reduced by pre-structuring the data, e.g. using binary trees as proposed by the approximated nearest neighbor (ANN) library. Yet, the recent opening of graphics processing units (GPU) to general-purpose computation by means of the NVIDIA CUDA API offers the image and video processing community a powerful platform with parallel calculation capabilities. In this paper, we propose a CUDA implementation of the \"brute force\" kNN search and we compare its performances to several CPU-based implementations including an equivalent brute force algorithm and ANN.
We show a speed increase on synthetic and real data by up to one or two orders of magnitude depending on the data, with a quasi-linear behavior with respect to the data size in a given, practical range.", "In-memory tree structured index search is a fundamental database operation. Modern processors provide tremendous computing power by integrating multiple cores, each with wide vector units. There has been much work to exploit modern processor architectures for database primitives like scan, sort, join and aggregation. However, unlike other primitives, tree search presents significant challenges due to irregular and unpredictable data accesses in tree traversal. In this paper, we present FAST, an extremely fast architecture sensitive layout of the index tree. FAST is a binary tree logically organized to optimize for architecture features like page size, cache line size, and SIMD width of the underlying hardware. FAST eliminates impact of memory latency, and exploits thread-level and data-level parallelism on both CPUs and GPUs to achieve 50 million (CPU) and 85 million (GPU) queries per second, 5X (CPU) and 1.7X (GPU) faster than the best previously reported performance on the same architectures. FAST supports efficient bulk updates by rebuilding index trees in less than 0.1 seconds for datasets as large as 64M keys and naturally integrates compression techniques, overcoming the memory bandwidth bottleneck and achieving a 6X performance improvement over uncompressed index search for large keys on CPUs.", "We present new algorithms for performing fast computation of several common database operations on commodity graphics processors. Specifically, we consider operations such as conjunctive selections, aggregations, and semi-linear queries, which are essential computational components of typical database, data warehousing, and data mining applications.
While graphics processing units (GPUs) have been designed for fast display of geometric primitives, we utilize the inherent pipelining and parallelism, single instruction and multiple data (SIMD) capabilities, and vector processing functionality of GPUs, for evaluating boolean predicate combinations and semi-linear queries on attributes and executing database operations efficiently. Our algorithms take into account some of the limitations of the programming model of current GPUs and perform no data rearrangements. Our algorithms have been implemented on a programmable GPU (e.g. NVIDIA's GeForce FX 5900) and applied to databases consisting of up to a million records. We have compared their performance with an optimized implementation of CPU-based algorithms. Our experiments indicate that the graphics processor available on commodity computer systems is an effective co-processor for performing database operations." ] }
1103.1252
2124157324
Information distributed through the Web keeps growing faster every day, and for this reason several techniques for extracting Web data have been suggested in recent years. Often, extraction tasks are performed through so-called wrappers, procedures that extract information from Web pages, e.g. by implementing logic-based techniques. Many fields of application today require a strong degree of robustness of wrappers, so as not to compromise information assets or the reliability of the extracted data.
The theoretical background for the techniques and algorithms widely adopted in this work comes from several fields of Computer Science and Applied Mathematics, such as Algorithms and Data Structures and Artificial Intelligence. In the setting of Web data extraction, algorithms on Document Object Model (DOM) trees play an especially predominant role. Approaches for analyzing similarities between trees were developed starting from the well-known problem of finding the longest common subsequence(s) between two strings. Several algorithms were suggested; for example, Hirschberg @cite_1 provided proofs of correctness for three of them.
{ "cite_N": [ "@cite_1" ], "mid": [ "2165654401" ], "abstract": [ "The problem of finding a longest common subsequence of two strings has been solved in quadratic time and space. An algorithm is presented which will solve this problem in quadratic time and in linear space." ] }
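The space-saving observation at the heart of the result cited above is that the LCS length needs only the previous and current rows of the dynamic-programming table, giving quadratic time in linear space (recovering the subsequence itself takes Hirschberg's additional divide-and-conquer step, not shown here). A minimal sketch of the length computation:

```python
def lcs_length(a, b):
    """Length of the longest common subsequence of a and b, in
    O(len(a) * len(b)) time and O(len(b)) space: only two DP rows are kept."""
    prev = [0] * (len(b) + 1)
    for x in a:
        curr = [0]
        for j, y in enumerate(b, start=1):
            # Extend a match, or carry over the best of dropping x or y.
            curr.append(prev[j - 1] + 1 if x == y else max(prev[j], curr[j - 1]))
        prev = curr
    return prev[-1]
```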
1103.1252
2124157324
Information distributed through the Web keeps growing faster every day, and for this reason several techniques for extracting Web data have been suggested in recent years. Often, extraction tasks are performed through so-called wrappers, procedures that extract information from Web pages, e.g. by implementing logic-based techniques. Many fields of application today require a strong degree of robustness of wrappers, so as not to compromise information assets or the reliability of the extracted data.
Soon, a strong interconnection between this problem and the similarity between trees was pointed out: Tai @cite_17 introduced the notion of tree edit distance as a measure of the (dis)similarity between two trees and extended the notion of longest common subsequence(s) between strings to trees. Several algorithms were suggested, providing a way to transform a labeled tree into another through local operations such as inserting, deleting, and relabeling nodes. Bille @cite_7 provided a comprehensive survey on the tree edit distance and related problems, summarizing approaches and analyzing algorithms.
{ "cite_N": [ "@cite_7", "@cite_17" ], "mid": [ "1978478796", "1975009259" ], "abstract": [ "We survey the problem of comparing labeled trees based on simple local operations of deleting, inserting, and relabeling nodes. These operations lead to the tree edit distance, alignment distance, and inclusion problem. For each problem we review the results available and present, in detail, one or more of the central algorithms for solving the problem.", "The tree-to-tree correction problem is to determine, for two labeled ordered trees T and T', the distance from T to T' as measured by the minimum cost sequence of edit operations needed to transform T into T'. The edit operations investigated allow changing one node of a tree into another node, deleting one node from a tree, or inserting a node into a tree. An algorithm is presented which solves this problem in time O(V*V'*L^2*L'^2), where V and V' are the numbers of nodes respectively of T and T', and L and L' are the maximum depths respectively of T and T'. Possible applications are to the problems of measuring the similarity between trees, automatic error recovery and correction for programming languages, and determining the largest common substructure of two trees." ] }
1103.1252
2124157324
Information distributed through the Web keeps growing faster every day, and for this reason several techniques for extracting Web data have been suggested in recent years. Often, extraction tasks are performed through so-called wrappers, procedures that extract information from Web pages, e.g. by implementing logic-based techniques. Many fields of application today require a strong degree of robustness of wrappers, so as not to compromise information assets or the reliability of the extracted data.
Over the years, several improvements to tree edit distance techniques have been introduced: Shasha and Zhang @cite_5 provided proofs of correctness and implementations of new parallelizable algorithms for computing edit distances between trees, lowering the complexity from @math @math @math for the non-parallel implementation to @math for the parallel one; finally, Klein @cite_10 suggested a fast method for computing the edit distance between unrooted ordered trees in @math time. An overview of interesting applications of these algorithms in Computer Science can be found in @cite_11 .
{ "cite_N": [ "@cite_5", "@cite_10", "@cite_11" ], "mid": [ "1976373002", "1605903931", "2016885468" ], "abstract": [ "Ordered labeled trees are trees in which the left-to-right order among siblings is significant. The distance between two ordered trees is considered to be the weighted number of edit operations (insert, delete, and modify) to transform one tree to another. The problem of approximate tree matching is also considered. Specifically, algorithms are designed to answer the following kinds of questions:1. What is the distance between two trees? 2. What is the minimum distance between @math and @math when zero or more subtrees can be removed from @math ? 3. Let the pruning of a tree at node n mean removing all the descendants of node n. The analogous question for prunings as for subtrees is answered.A dynamic programming algorithm is presented to solve the three questions in sequential time @math and space @math compared with $O(|T_1 | |T_2 | ...", "An ordered tree is a tree in which each node's incident edges are cyclically ordered; think of the tree as being embedded in the plane. Let A and B be two ordered trees. The edit distance between A and B is the minimum cost of a sequence of operations (contract an edge, uncontract an edge, modify the label of an edge) needed to transform A into B. We give an O(n3 log n) algorithm to compute the edit distance between two ordered trees.", "In recent years, XML has been established as a major means for information management, and has been broadly utilized for complex data representation (e.g. multimedia objects). Owing to an unparalleled increasing use of the XML standard, developing efficient techniques for comparing XML-based documents becomes essential in the database and information retrieval communities. In this paper, we provide an overview of XML similarity comparison by presenting existing research related to XML similarity. 
We also detail the possible applications of XML comparison processes in various fields, ranging over data warehousing, data integration, classification clustering and XML querying, and discuss some required and emergent future research directions." ] }
1103.1252
2124157324
Information distributed through the Web keeps growing faster day by day and, for this reason, several techniques for extracting Web data have been suggested in recent years. Extraction tasks are often performed by so-called wrappers, procedures that extract information from Web pages, e.g. by implementing logic-based techniques. Many fields of application today require a strong degree of robustness of wrappers, so as not to compromise assets of information or the reliability of the extracted data.
The literature on Web data extraction is manifold: @cite_2 provided a comprehensive survey of application areas and of the techniques used, and @cite_12 gave a very good overview of wrapper generation techniques. Focusing on wrapper repair, Chidlovskii @cite_4 presented some experimental results of combining and applying grammatical and logic-based rules. @cite_13 developed a machine-learning based system for wrapper verification and reinduction in case of failure in extracting data from Web pages.
{ "cite_N": [ "@cite_13", "@cite_4", "@cite_12", "@cite_2" ], "mid": [ "2115770258", "1995746869", "2005646337", "2148317291" ], "abstract": [ "The proliferation of online information sources has led to an increased use of wrappers for extracting data from Web sources. While most of the previous research has focused on quick and efficient generation of wrappers, the development of tools for wrapper maintenance has received less attention. This is an important research problem because Web sources often change in ways that prevent the wrappers from extracting data correctly. We present an efficient algorithm that learns structural information about data from positive examples alone. We describe how this information can be used for two wrapper maintenance applications: wrapper verification and reinduction. The wrapper verification system detects when a wrapper is not extracting correct data, usually because the Web source has changed its format. The reinduction algorithm automatically recovers from changes in the Web source by identifying data on Web pages so that a new wrapper may be generated for this source. To validate our approach, we monitored 27 wrappers over a period of a year. The verification algorithm correctly discovered 35 of the 37 wrapper changes, and made 16 mistakes, resulting in precision of 0.73 and recall of 0.95. We validated the reinduction algorithm on ten Web sources. We were able to successfully reinduce the wrappers, obtaining precision and recall values of 0.90 and 0.80 on the data extraction task.", "We study the problem of automatic repairing of wrappers for Web information providers. Majority of Web wrappers use \"hooks'' or \"landmarks'' to find and extract relevant information from Web pages and such wrappers often become inoperable when the page structure is changed. 
The solution we propose in this paper extends conventional forward wrappers with alternative classifiers built using content features of extracted information and wrappers processing pages backward. We report some preliminary results of the information extraction recovery and wrapper repairing for a set of real Web provider changes.", "In the last few years, several works in the literature have addressed the problem of data extraction from Web pages. The importance of this problem derives from the fact that, once extracted, the data can be handled in a way similar to instances of a traditional database. The approaches proposed in the literature to address the problem of Web data extraction use techniques borrowed from areas such as natural language processing, languages and grammars, machine learning, information retrieval, databases, and ontologies. As a consequence, they present very distinct features and capabilities which make a direct comparison difficult to be done. In this paper, we propose a taxonomy for characterizing Web data extraction fools, briefly survey major Web data extraction tools described in the literature, and provide a qualitative analysis of them. Hopefully, this work will stimulate other studies aimed at a more comprehensive analysis of data extraction approaches and tools for Web data.", "Web Data Extraction is an important problem that has been studied by means of different scientific tools and in a broad range of applications. Many approaches to extracting data from the Web have been designed to solve specific problems and operate in ad-hoc domains. Other approaches, instead, heavily reuse techniques and algorithms developed in the field of Information Extraction.This survey aims at providing a structured and comprehensive overview of the literature in the field of Web Data Extraction. 
We provided a simple classification framework in which existing Web Data Extraction applications are grouped into two main classes, namely applications at the Enterprise level and at the Social Web level. At the Enterprise level, Web Data Extraction techniques emerge as a key tool to perform data analysis in Business and Competitive Intelligence systems as well as for business process re-engineering. At the Social Web level, Web Data Extraction techniques allow to gather a large amount of structured data continuously generated and disseminated by Web 2.0, Social Media and Online Social Network users and this offers unprecedented opportunities to analyze human behavior at a very large scale. We discuss also the potential of cross-fertilization, i.e., on the possibility of re-using Web Data Extraction techniques originally designed to work in a given domain, in other domains." ] }
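To make the notion of a wrapper, and its fragility under page redesigns, concrete, here is a toy extractor built on Python's standard `html.parser`. The tag-suffix idea loosely mirrors relative versus absolute XPath addressing; the class, the sample pages and all names are hypothetical, not part of any cited system:

```python
from html.parser import HTMLParser

class PathExtractor(HTMLParser):
    """Toy wrapper: collect the text of elements whose open-tag path ends
    with a given suffix. A short suffix plays the role of a relative XPath,
    a full-path suffix that of an absolute one. Purely illustrative; mixed
    content and void tags are not handled."""
    def __init__(self, suffix):
        super().__init__()
        self.suffix = tuple(suffix)
        self.stack = []      # open tags from the root down to here
        self.hits = []       # extracted text fragments
        self._grab = False

    def handle_starttag(self, tag, attrs):
        self.stack.append(tag)
        self._grab = tuple(self.stack[-len(self.suffix):]) == self.suffix

    def handle_endtag(self, tag):
        if self.stack and self.stack[-1] == tag:
            self.stack.pop()
        self._grab = False

    def handle_data(self, data):
        if self._grab and data.strip():
            self.hits.append(data.strip())

def extract(page, suffix):
    parser = PathExtractor(suffix)
    parser.feed(page)
    return parser.hits

# Hypothetical pages: a site redesign wraps the table in a <div>.
old_page = "<html><body><table><tr><td>Alice</td><td>30</td></tr></table></body></html>"
new_page = "<html><body><div><table><tr><td>Alice</td><td>30</td></tr></table></div></body></html>"
```

After the redesign, the relative wrapper `extract(new_page, ('tr', 'td'))` still returns both fields, while the absolute one `extract(new_page, ('html', 'body', 'table', 'tr', 'td'))` silently returns nothing: exactly the kind of breakage the maintenance and verification techniques above aim to detect and repair.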
1103.1252
2124157324
Information distributed through the Web keeps growing faster day by day and, for this reason, several techniques for extracting Web data have been suggested in recent years. Extraction tasks are often performed by so-called wrappers, procedures that extract information from Web pages, e.g. by implementing logic-based techniques. Many fields of application today require a strong degree of robustness of wrappers, so as not to compromise assets of information or the reliability of the extracted data.
@cite_16 suggested a new approach to wrapper maintenance, called SG-WRAM (Schema-Guided WRApper Maintenance), based on the observation that changes in Web pages usually preserve syntactic features (i.e., data patterns, string lengths, etc.), hyperlinks and annotations (e.g., descriptive information representing the semantic meaning of a piece of information in its context).
{ "cite_N": [ "@cite_16" ], "mid": [ "2169262681" ], "abstract": [ "Extracting data from Web pages using wrappers is a fundamental problem arising in a large variety of applications of vast practical interests. There are two main issues relevant to Web-data extraction, namely wrapper generation and wrapper maintenance. In this paper, we propose a novel schema-guided approach to the problem of automatic wrapper maintenance. It is based on the observation that despite various page changes, many important features of the pages are preserved, such as syntactic patterns, annotations, and hyperlinks of the extracted data items. Our approach uses these preserved features to identify the locations of the desired values in the changed pages, and repair wrappers correspondingly by inducing semantic blocks from the HTML tree. Our intensive experiments on real Web sites show that the proposed approach can effectively maintain wrappers to extract desired data with high accuracies." ] }
1103.1252
2124157324
Information distributed through the Web keeps growing faster day by day and, for this reason, several techniques for extracting Web data have been suggested in recent years. Extraction tasks are often performed by so-called wrappers, procedures that extract information from Web pages, e.g. by implementing logic-based techniques. Many fields of application today require a strong degree of robustness of wrappers, so as not to compromise assets of information or the reliability of the extracted data.
Wong @cite_15 developed a probabilistic framework for adapting a previously learned wrapper to unseen Web pages, including the possibility of discovering new attributes not covered by the original wrapper, relying on the extraction knowledge related to the first wrapping task and on the collection of items gathered from the first Web page.
{ "cite_N": [ "@cite_15" ], "mid": [ "1606262545" ], "abstract": [ "We develop a probabilistic framework for adapting information extraction wrappers with new attribute discovery. Wrapper adaptation aims at automatically adapting a previously learned wrapper from the source Web site to a new unseen site for information extraction. One unique characteristic of our framework is that it can discover new or previously unseen attributes as well as headers from the new site. It is based on a generative model for the generation of text fragments related to attribute items and formatting data in a Web page. To solve the wrapper adaptation problem, we consider two kinds of information from the source Web site. The first kind of information is the extraction knowledge contained in the previously learned wrapper from the source Web site. The second kind of information is the previously extracted or collected items. We employ a Bayesian learning approach to automatically select a set of training examples for adapting a wrapper for the new unseen site. To solve the new attribute discovery problem, we develop a model which analyzes the surrounding text fragments of the attributes in the new unseen site. A Bayesian learning method is developed to discover the new attributes and their headers. EM technique is employed in both Bayesian learning models. We conducted extensive experiments from a number of real-world Web sites to demonstrate the effectiveness of our framework." ] }
1103.1252
2124157324
Information distributed through the Web keeps growing faster day by day and, for this reason, several techniques for extracting Web data have been suggested in recent years. Extraction tasks are often performed by so-called wrappers, procedures that extract information from Web pages, e.g. by implementing logic-based techniques. Many fields of application today require a strong degree of robustness of wrappers, so as not to compromise assets of information or the reliability of the extracted data.
@cite_6 had already suggested the possibility of exploiting previously acquired information, e.g. query results, to re-induce a new wrapper from an old one that no longer works because of structural changes in the Web pages. @cite_9 compared the results of simple tree matching and of a modified, weighted version of the same algorithm in extracting information from HTML Web pages; this approach shares similarities with the one followed here to perform the adaptation of wrappers. @cite_3 focused on the robustness of wrappers exploiting absolute and relative XPath queries.
{ "cite_N": [ "@cite_9", "@cite_3", "@cite_6" ], "mid": [ "2108730513", "2151192680", "1971518650" ], "abstract": [ "The main issue for effective Web information extraction is how to recognize similar patterns in a Web page. Traditionally, it has been shown that pattern matching by using the HTML DOM tree is more efficient than the simple string matching approach. Nonetheless, previous tree-based pattern matching methods have problems by assuming that all HTML tags have the same values, assigning the same weight to each node in HTML trees. This paper proposes an enhanced tree matching algorithm that improves the tree edit distance method by considering the characteristics of HTML features. We assign different values to different HTML tree nodes according to their weights for displaying the corresponding data objects in the browser. Pattern matching of HTML patterns is done by obtaining the maximum mapping values of two HTML trees that are constructed with weighted node values from HTML data objects. Experiments are done over several Web commerce sites to evaluate the effectiveness of the proposed HTML tree matching algorithm.", "We demonstrate myPortal - an application for web content block extraction and aggregation. The research issues behind the tool are also explained, with an emphasis on robustness of web content extraction.", "During the last years, significant attention has been paid to the problem of building wrappers for extracting data from semistructured web sources. Nevertheless, since web sources are autonomous, they may experience changes that invalidate the wrappers. In this paper, we present new heuristics and algorithms to address the problem of automatic wrapper maintenance. Our approach is based on collecting query results during wrapper operation and using them later to generate new sets of examples that can be used to induce a new wrapper when the source changes." ] }
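The simple tree matching recursion compared in the work above can be sketched in a few lines (its unweighted form, with a dynamic program over sibling sequences); representing trees as (label, children) tuples is an assumption of this sketch, not the cited systems' data model:

```python
def simple_tree_matching(a, b):
    """Size of a maximum matching between two ordered labeled trees, in the
    spirit of the unweighted simple tree matching algorithm. The weighted
    variant discussed above would replace the constant 1 with per-tag
    weights reflecting each HTML tag's importance."""
    if a[0] != b[0]:                      # roots with different labels never match
        return 0
    ca, cb = a[1], b[1]
    m, n = len(ca), len(cb)
    # M[i][j]: best matching between the first i children of a
    # and the first j children of b.
    M = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            M[i][j] = max(
                M[i - 1][j],
                M[i][j - 1],
                M[i - 1][j - 1] + simple_tree_matching(ca[i - 1], cb[j - 1]),
            )
    return 1 + M[m][n]
```

Two identical trees score their full node count, while any relabeled or missing subtree lowers the score, which is what makes the measure usable for comparing versions of an HTML page.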
1103.1689
2157969264
Consider the problem of learning the drift coefficient of a stochastic differential equation from a sample path. In this paper, we assume that the drift is parametrized by a high dimensional vector. We address the question of how long the system needs to be observed in order to learn this vector of parameters. We prove a general lower bound on this time complexity by using a characterization of mutual information as time integral of conditional variance, due to Kadota, Zakai, and Ziv. This general lower bound is applied to specific classes of linear and non-linear stochastic differential equations. In the linear case, the problem under consideration is the one of learning a matrix of interaction coefficients. We evaluate our lower bound for ensembles of sparse and dense random matrices. The resulting estimates match the qualitative behavior of upper bounds achieved by computationally efficient procedures.
Over the last few years, a significant effort has been devoted to developing methods and sample complexity bounds for learning graphical models from data. Particular effort was devoted to learning sparse graphical models using convex regularizations that promote sparsity. Well-known examples in the context of Gaussian graphical models include the graphical lasso @cite_15 and the pseudo-likelihood method of @cite_4 . These papers assume that the data are i.i.d. samples from a high-dimensional Gaussian distribution. However, in many cases samples are produced by an underlying dynamical process, and the i.i.d. assumption is unrealistic.
{ "cite_N": [ "@cite_15", "@cite_4" ], "mid": [ "2132555912", "2010824638" ], "abstract": [ "SUMMARY We consider the problem of estimating sparse graphs by a lasso penalty applied to the inverse covariance matrix. Using a coordinate descent procedure for the lasso, we develop a simple algorithm—the graphical lasso—that is remarkably fast: It solves a 1000-node problem (∼500 000 parameters) in at most a minute and is 30–4000 times faster than competing methods. It also provides a conceptual link between the exact problem and the approximation suggested by Meinshausen and B¨ uhlmann (2006). We illustrate the method on some cell-signaling data from proteomics.", "The pattern of zero entries in the inverse covariance matrix of a multivariate normal distribution corresponds to conditional independence restrictions between variables. Covariance selection aims at estimating those structural zeros from data. We show that neighborhood selection with the Lasso is a computationally attractive alternative to standard covariance selection for sparse high-dimensional graphs. Neighborhood selection estimates the conditional independence restrictions separately for each node in the graph and is hence equivalent to variable selection for Gaussian linear models. We show that the proposed neighborhood selection scheme is consistent for sparse high-dimensional graphs. Consistency hinges on the choice of the penalty parameter. The oracle value for optimal prediction does not lead to a consistent neighborhood estimate. Controlling instead the probability of falsely joining some distinct connectivity components of the graph, consistent estimation for sparse graphs is achieved (with exponential rates), even when the number of variables grows as the number of observations raised to an arbitrary power." ] }
1103.1689
2157969264
Consider the problem of learning the drift coefficient of a stochastic differential equation from a sample path. In this paper, we assume that the drift is parametrized by a high dimensional vector. We address the question of how long the system needs to be observed in order to learn this vector of parameters. We prove a general lower bound on this time complexity by using a characterization of mutual information as time integral of conditional variance, due to Kadota, Zakai, and Ziv. This general lower bound is applied to specific classes of linear and non-linear stochastic differential equations. In the linear case, the problem under consideration is the one of learning a matrix of interaction coefficients. We evaluate our lower bound for ensembles of sparse and dense random matrices. The resulting estimates match the qualitative behavior of upper bounds achieved by computationally efficient procedures.
In @cite_1 , a convex regularization method was developed to learn linear SDE's with a sparse network structure from data. The upper bounds on the sample complexity proved in @cite_1 match in several cases the lower bounds developed here. The related topic of learning graphical models for autoregressive processes was studied recently in @cite_14 @cite_0 . These papers propose a convex relaxation different from the one of @cite_1 , without however developing estimates on the sample complexity for model selection.
{ "cite_N": [ "@cite_0", "@cite_14", "@cite_1" ], "mid": [ "2112856361", "562372631", "2154104617" ], "abstract": [ "An algorithm is presented for topology selection in graphical models of autoregressive Gaussian time series. The graph topology of the model represents the sparsity pattern of the inverse spectrum of the time series and characterizes conditional independence relations between the variables. The method proposed in the paper is based on an l1-type nonsmooth regularization of the conditional maximum likelihood estimation problem. We show that this reduces to a convex optimization problem and describe a large-scale algorithm that solves the dual problem via the gradient projection method. Results of experiments with randomly generated and real data sets are also included.", "1. Automatic code generation for real-time convex optimization J. Mattingley and S. Boyd 2. Gradient-based algorithms with applications to signal recovery problems A. Beck and M. Teboulle 3. Graphical models of autoregressive processes J. Songsiri, J. Dahl and L. Vandenberghe 4. SDP relaxation of homogeneous quadratic optimization Z. Q. Luo and T. H. Chang 5. Probabilistic analysis of SDR detectors for MIMO systems A. Man-Cho So and Y. Ye 6. Semidefinite programming, matrix decomposition, and radar code design Y. Huang, A. De Maio and S. Zhang 7. Convex analysis for non-negative blind source separation with application in imaging W. K. Ma, T. H. Chan, C. Y. Chi and Y. Wang 8. Optimization techniques in modern sampling theory T. Michaeli and Y. C. Eldar 9. Robust broadband adaptive beamforming using convex optimization M. Rubsamen, A. El-Keyi, A. B. Gershman and T. Kirubarajan 10. Cooperative distributed multi-agent optimization A. Nenadic and A. Ozdaglar 11. Competitive optimization of cognitive radio MIMO systems via game theory G. Scutari, D. P. Palomar and S. Barbarossa 12. Nash equilibria: the variational approach F. Facchinei and J. S. 
Pang.", "We consider linear models for stochastic dynamics. To any such model can be associated a network (namely a directed graph) describing which degrees of freedom interact under the dynamics. We tackle the problem of learning such a network from observation of the system trajectory over a time interval @math . We analyze the @math -regularized least squares algorithm and, in the setting in which the underlying network is sparse, we prove performance guarantees that are as long as this is sufficiently high. This result substantiates the notion of a well defined time complexity' for the network inference problem." ] }
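As a minimal illustration of the setting, and not the sparsity-regularized estimator of @cite_1, one can simulate a stable linear SDE dx = A x dt + dB with Euler-Maruyama and recover the interaction matrix A by ordinary least squares on the increments; the matrix A and all constants below are made up for the example:

```python
import numpy as np

# Illustrative sketch: simulate a stable linear SDE dx = A x dt + dB and
# estimate A by plain least squares. The cited work adds an l1 penalty to
# exploit sparsity; this unpenalized version only shows the data model.
rng = np.random.default_rng(0)
p, n_steps, dt = 3, 50_000, 0.01
A = np.array([[-1.0,  0.5,  0.0],
              [ 0.0, -1.0,  0.5],
              [ 0.0,  0.0, -1.0]])      # stable: all eigenvalues equal -1

x = np.zeros(p)
states, increments = [], []
for _ in range(n_steps):
    dx = A @ x * dt + np.sqrt(dt) * rng.standard_normal(p)
    states.append(x.copy())
    increments.append(dx)
    x = x + dx

X = np.array(states)                    # trajectory, shape (n_steps, p)
dX = np.array(increments)               # increments, shape (n_steps, p)
# Discrete model: dX ≈ X @ A.T * dt, so regress dX / dt on X.
A_hat = np.linalg.lstsq(X, dX / dt, rcond=None)[0].T
```

With total observation time n_steps * dt = 500, `A_hat` typically recovers the entries of A to within roughly a tenth, and the error shrinks as the observation time grows, consistent with the time-complexity picture discussed above.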
1103.1689
2157969264
Consider the problem of learning the drift coefficient of a stochastic differential equation from a sample path. In this paper, we assume that the drift is parametrized by a high dimensional vector. We address the question of how long the system needs to be observed in order to learn this vector of parameters. We prove a general lower bound on this time complexity by using a characterization of mutual information as time integral of conditional variance, due to Kadota, Zakai, and Ziv. This general lower bound is applied to specific classes of linear and non-linear stochastic differential equations. In the linear case, the problem under consideration is the one of learning a matrix of interaction coefficients. We evaluate our lower bound for ensembles of sparse and dense random matrices. The resulting estimates match the qualitative behavior of upper bounds achieved by computationally efficient procedures.
Finally, a substantial literature addresses various questions related to learning SDEs @cite_5 @cite_8 @cite_12 . However, this line of work did not yield quantitative estimates of how the sample complexity scales with the problem dimensionality.
{ "cite_N": [ "@cite_5", "@cite_12", "@cite_8" ], "mid": [ "", "2131766020", "2073644805" ], "abstract": [ "", "Many students are familiar with the idea of modeling chemical reactions in terms of ordinary differential equations. However, these deterministic reaction rate equations are really a certain large-scale limit of a sequence of finer-scale probabilistic models. In studying this hierarchy of models, students can be exposed to a range of modern ideas in applied and computational mathematics. This article introduces some of the basic concepts in an accessible manner and points to some challenges that currently occupy researchers in this area. Short, downloadable MATLAB codes are listed and described.", "We study the problem of parameter estimation for time-series possessing two, widely separated, characteristic time scales. The aim is to understand situations where it is desirable to fit a homogenized single-scale model to such multiscale data. We demonstrate, numerically and analytically, that if the data is sampled too finely then the parameter fit will fail, in that the correct parameters in the homogenized model are not identified. We also show, numerically and analytically, that if the data is subsampled at an appropriate rate then it is possible to estimate the coefficients of the homogenized model correctly." ] }
1103.1417
2142153450
We consider the problem of positioning a cloud of points in the Euclidean space ℝ^d, using noisy measurements of a subset of pairwise distances. This task has applications in various areas, such as sensor network localization and reconstruction of protein conformations from NMR measurements. It is also closely related to dimensionality reduction problems and manifold learning, where the goal is to learn the underlying global geometry of a data set using local (or partial) metric information. Here we propose a reconstruction algorithm based on semidefinite programming. For a random geometric graph model and uniformly bounded noise, we provide a precise characterization of the algorithm’s performance: in the noiseless case, we find a radius r_0 beyond which the algorithm reconstructs the exact positions (up to rigid transformations). In the presence of noise, we obtain upper and lower bounds on the reconstruction error that match up to a factor that depends only on the dimension d, and the average degree of the nodes in the graph.
The localization problem and its variants have attracted significant interest over the past years due to their applications in numerous areas, such as sensor network localization @cite_20 , NMR spectroscopy @cite_25 , and manifold learning @cite_22 @cite_24 , to name a few.
{ "cite_N": [ "@cite_24", "@cite_22", "@cite_25", "@cite_20" ], "mid": [ "2001141328", "2063532964", "2130185724", "2156865565" ], "abstract": [ "Scientists working with large volumes of high-dimensional data, such as global climate patterns, stellar spectra, or human gene distributions, regularly confront the problem of dimensionality reduction: finding meaningful low-dimensional structures hidden in their high-dimensional observations. The human brain confronts the same problem in everyday perception, extracting from its high-dimensional sensory inputs—30,000 auditory nerve fibers or 106 optic nerve fibers—a manageably small number of perceptually relevant features. Here we describe an approach to solving dimensionality reduction problems that uses easily measured local metric information to learn the underlying global geometry of a data set. Unlike classical techniques such as principal component analysis (PCA) and multidimensional scaling (MDS), our approach is capable of discovering the nonlinear degrees of freedom that underlie complex natural observations, such as human handwriting or images of a face under different viewing conditions. In contrast to previous algorithms for nonlinear dimensionality reduction, ours efficiently computes a globally optimal solution, and, for an important class of data manifolds, is guaranteed to converge asymptotically to the true structure.", "The problem of dimensionality reduction arises in many fields of information processing, including machine learning, data compression, scientific visualization, pattern recognition, and neural computation. Here we describe locally linear embedding (LLE), an unsupervised learning algorithm that computes low dimensional, neighborhood preserving embeddings of high dimensional data. The data, assumed to be sampled from an underlying manifold, are mapped into a single global coordinate system of lower dimensionality. 
The mapping is derived from the symmetries of locally linear reconstructions, and the actual computation of the embedding reduces to a sparse eigenvalue problem. Notably, the optimizations in LLE---though capable of generating highly nonlinear embeddings---are simple to implement, and they do not involve local minima. In this paper, we describe the implementation of the algorithm in detail and discuss several extensions that enhance its performance. We present results of the algorithm applied to data sampled from known manifolds, as well as to collections of images of faces, lips, and handwritten digits. These examples are used to provide extensive illustrations of the algorithm's performance---both successes and failures---and to relate the algorithm to previous and ongoing work in nonlinear dimensionality reduction.", "We develop and apply a previously undescribed framework that is designed to extract information in the form of a positive definite kernel matrix from possibly crude, noisy, incomplete, inconsistent dissimilarity information between pairs of objects, obtainable in a variety of contexts. Any positive definite kernel defines a consistent set of distances, and the fitted kernel provides a set of coordinates in Euclidean space that attempts to respect the information available while controlling for complexity of the kernel. The resulting set of coordinates is highly appropriate for visualization and as input to classification and clustering algorithms. The framework is formulated in terms of a class of optimization problems that can be solved efficiently by using modern convex cone programming software. The power of the method is illustrated in the context of protein clustering based on primary sequence data. 
An application to the globin family of proteins resulted in a readily visualizable 3D sequence space of globins, where several subfamilies and subgroupings consistent with the literature were easily identifiable.", "We describe an SDP relaxation based method for the position estimation problem in wireless sensor networks. The optimization problem is set up so as to minimize the error in sensor positions to fit distance measures. Observable gauges are developed to check the quality of the point estimation of sensors or to detect erroneous sensors. The performance of this technique is highly satisfactory compared to other techniques. Very few anchor nodes are required to accurately estimate the position of all the unknown nodes in a network. Also the estimation errors are minimal even when the anchor nodes are not suitably placed within the network or the distance measurements are noisy." ] }
1103.1417
2142153450
We consider the problem of positioning a cloud of points in the Euclidean space ℝ^d, using noisy measurements of a subset of pairwise distances. This task has applications in various areas, such as sensor network localization and reconstruction of protein conformations from NMR measurements. It is also closely related to dimensionality reduction problems and manifold learning, where the goal is to learn the underlying global geometry of a data set using local (or partial) metric information. Here we propose a reconstruction algorithm based on semidefinite programming. For a random geometric graph model and uniformly bounded noise, we provide a precise characterization of the algorithm’s performance: in the noiseless case, we find a radius r_0 beyond which the algorithm reconstructs the exact positions (up to rigid transformations). In the presence of noise, we obtain upper and lower bounds on the reconstruction error that match up to a factor that depends only on the dimension d, and the average degree of the nodes in the graph.
The existing algorithms can be categorized into two groups. The first group consists of algorithms that first estimate the missing distances and then use MDS to find the positions from the reconstructed distance matrix @cite_8 @cite_11 . MDS-MAP @cite_11 and ISOMAP @cite_24 are two well-known examples of this class, where the missing entries of the distance matrix are approximated by computing the shortest paths between all pairs of nodes. The algorithms in the second group formulate the localization problem as a non-convex optimization problem and then use different relaxation schemes to solve it. An example of this type is relaxation to an SDP @cite_20 @cite_9 @cite_0 @cite_17 @cite_23 . A crucial assumption in these works is the existence of some anchors among the nodes whose exact positions are known. The SDP is then used to efficiently check whether the graph is uniquely @math -localizable and to find its unique realization.
{ "cite_N": [ "@cite_8", "@cite_9", "@cite_17", "@cite_24", "@cite_0", "@cite_23", "@cite_20", "@cite_11" ], "mid": [ "2071494625", "", "2125947724", "2001141328", "", "198244778", "2156865565", "" ], "abstract": [ "Sensor localization from only connectivity information is a highly challenging problem. To this end, our result for the first time establishes an analytic bound on the performance of the popular MDS-MAP algorithm based on multidimensional scaling. For a network consisting of n sensors positioned randomly on a unit square and a given radio range r = o(1), we show that resulting error is bounded, decreasing at a rate that is inversely proportional to r, when only connectivity information is given. The same bound holds for the range-based model, when we have an approximate measurements for the distances, and the same algorithm can be applied without any modification.", "", "Given a partial symmetric matrix A with only certain elements specified, the Euclidean distance matrix completion problem (EDMCP) is to find the unspecified elements of A that make A a Euclidean distance matrix (EDM). In this paper, we follow the successful approach in [20] and solve the EDMCP by generalizing the completion problem to allow for approximate completions. In particular, we introduce a primal-dual interior-point algorithm that solves an equivalent (quadratic objective function) semidefinite programming problem (SDP). Numerical results are included which illustrate the efficiency and robustness of our approach. Our randomly generated problems consistently resulted in low dimensional solutions when no completion existed.", "Scientists working with large volumes of high-dimensional data, such as global climate patterns, stellar spectra, or human gene distributions, regularly confront the problem of dimensionality reduction: finding meaningful low-dimensional structures hidden in their high-dimensional observations. 
The human brain confronts the same problem in everyday perception, extracting from its high-dimensional sensory inputs—30,000 auditory nerve fibers or 10^6 optic nerve fibers—a manageably small number of perceptually relevant features. Here we describe an approach to solving dimensionality reduction problems that uses easily measured local metric information to learn the underlying global geometry of a data set. Unlike classical techniques such as principal component analysis (PCA) and multidimensional scaling (MDS), our approach is capable of discovering the nonlinear degrees of freedom that underlie complex natural observations, such as human handwriting or images of a face under different viewing conditions. In contrast to previous algorithms for nonlinear dimensionality reduction, ours efficiently computes a globally optimal solution, and, for an important class of data manifolds, is guaranteed to converge asymptotically to the true structure.", "", "Many problems in AI are simplified by clever representations of sensory or symbolic input. How to discover such representations automatically, from large amounts of unlabeled data, remains a fundamental challenge. The goal of statistical methods for dimensionality reduction is to detect and discover low dimensional structure in high dimensional data. In this paper, we review a recently proposed algorithm--maximum variance unfolding--for learning faithful low dimensional representations of high dimensional data. The algorithm relies on modern tools in convex optimization that are proving increasingly useful in many areas of machine learning.", "We describe an SDP relaxation based method for the position estimation problem in wireless sensor networks. The optimization problem is set up so as to minimize the error in sensor positions to fit distance measures. Observable gauges are developed to check the quality of the point estimation of sensors or to detect erroneous sensors. 
The performance of this technique is highly satisfactory compared to other techniques. Very few anchor nodes are required to accurately estimate the position of all the unknown nodes in a network. Also the estimation errors are minimal even when the anchor nodes are not suitably placed within the network or the distance measurements are noisy.", "" ] }
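The shortest-path completion step used by MDS-MAP and ISOMAP, followed by classical MDS, can be sketched as follows. This is an illustrative reimplementation under our own conventions (dense matrices, `np.inf` marking unknown distances), not the cited authors' code:

```python
import numpy as np

def mds_map(D_partial, dim):
    """Sketch of the MDS-MAP pipeline: complete missing pairwise
    distances via all-pairs shortest paths, then run classical MDS.
    D_partial: (n, n) symmetric matrix, np.inf for unknown entries."""
    n = D_partial.shape[0]
    # Floyd-Warshall: approximate missing distances by shortest paths.
    D = D_partial.copy()
    for k in range(n):
        D = np.minimum(D, D[:, k:k + 1] + D[k:k + 1, :])
    # Classical MDS: double-center the squared distances ...
    J = np.eye(n) - np.ones((n, n)) / n
    B = -0.5 * J @ (D ** 2) @ J
    # ... and embed using the top `dim` eigenvectors.
    w, V = np.linalg.eigh(B)          # eigenvalues in ascending order
    idx = np.argsort(w)[::-1][:dim]
    return V[:, idx] * np.sqrt(np.maximum(w[idx], 0.0))
```

The Floyd-Warshall loop is O(n^3); for large networks a sparse shortest-path routine would be used instead, but the double-centering and eigendecomposition steps are the same.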
1103.1417
2142153450
We consider the problem of positioning a cloud of points in the Euclidean space ℝ^d, using noisy measurements of a subset of pairwise distances. This task has applications in various areas, such as sensor network localization and reconstruction of protein conformations from NMR measurements. It is also closely related to dimensionality reduction problems and manifold learning, where the goal is to learn the underlying global geometry of a data set using local (or partial) metric information. Here we propose a reconstruction algorithm based on semidefinite programming. For a random geometric graph model and uniformly bounded noise, we provide a precise characterization of the algorithm’s performance: in the noiseless case, we find a radius r_0 beyond which the algorithm reconstructs the exact positions (up to rigid transformations). In the presence of noise, we obtain upper and lower bounds on the reconstruction error that match up to a factor that depends only on the dimension d, and the average degree of the nodes in the graph.
Maximum Variance Unfolding (MVU) is an SDP-based algorithm with a very similar flavor to ours @cite_23 . MVU is an approach to solving dimensionality reduction problems using local metric information, based on the following simple interpretation. Assume @math points lie on a low dimensional manifold in a high dimensional ambient space. In order to find a low dimensional representation of this data set, the algorithm attempts to unfold the underlying manifold. To this end, MVU pulls the points apart in the ambient space, maximizing the total sum of their pairwise distances while respecting the local information. However, to the best of our knowledge, no performance guarantee has been proved for the MVU algorithm.
{ "cite_N": [ "@cite_23" ], "mid": [ "198244778" ], "abstract": [ "Many problems in AI are simplified by clever representations of sensory or symbolic input. How to discover such representations automatically, from large amounts of unlabeled data, remains a fundamental challenge. The goal of statistical methods for dimensionality reduction is to detect and discover low dimensional structure in high dimensional data. In this paper, we review a recently proposed algorithm--maximum variance unfolding--for learning faithful low dimensional representations of high dimensional data. The algorithm relies on modern tools in convex optimization that are proving increasingly useful in many areas of machine learning." ] }
1103.1157
2951152322
In Artificial Intelligence, Coalition Structure Generation (CSG) refers to those cooperative complex problems that require finding an optimal partition, maximising a social welfare, of a set of entities involved in a system into exhaustive and disjoint coalitions. The solution of the CSG problem finds applications in many fields such as Machine Learning (covering machines, clustering), Data Mining (decision tree, discretization), Graph Theory, Natural Language Processing (aggregation), Semantic Web (service composition), and Bioinformatics. The problem of finding the optimal coalition structure is NP-complete. In this paper we present a greedy randomized adaptive search procedure (GRASP) with path-relinking to efficiently search the space of coalition structures. Experiments and comparisons to other algorithms prove the validity of the proposed method in solving this hard combinatorial problem.
Neither DP nor IDP is an anytime algorithm: they cannot be interrupted before their normal termination. @cite_1 presented the first anytime algorithm, sketched in Algorithm , which can be interrupted to obtain a solution within a time limit that is not guaranteed to be optimal; when not interrupted, it returns the optimal solution. The CSG process can be viewed as a search in a coalition structure graph, as reported in Figure . One desideratum is to be able to guarantee that the CS is within a worst case bound from optimal, i.e. that searching through a subset @math of coalition structures, @math is finite, and as small as possible, where @math is the best CS and @math is the best CS that has been seen in the subset @math . In @cite_1 it has been proved that: a) to bound @math , it suffices to search the lowest two levels of the coalition structure graph (with this search, the bound is @math , and the number of nodes searched is @math ); b) this bound is tight; and, c) no other search algorithm can establish any bound @math while searching only @math nodes or fewer.
{ "cite_N": [ "@cite_1" ], "mid": [ "2156887976" ], "abstract": [ "Coalition formation is a key topic in multiagent systems. One may prefer a coalition structure that maximizes the sum of the values of the coalitions, but often the number of coalition structures is too large to allow exhaustive search for the optimal one. Furthermore, finding the optimal coalition structure is NP-complete. But then, can the coalition structure found via a partial search be guaranteed to be within a bound from optimum? We show that none of the previous coalition structure generation algorithms can establish any bound because they search fewer nodes than a threshold that we show necessary for establishing a bound. We present an algorithm that establishes a tight bound within this minimal amount of search, and show that any other algorithm would have to search strictly more. The fraction of nodes needed to be searched approaches zero as the number of agents grows. If additional time remains, our anytime algorithm searches further, and establishes a progressively lower tight bound. Surprisingly, just searching one more node drops the bound in half. As desired, our algorithm lowers the bound rapidly early on, and exhibits diminishing returns to computation. It also significantly outperforms its obvious contenders. Finally, we show how to distribute the desired search across self-interested manipulative agents. © 1999 Elsevier Science B.V. All rights reserved." ] }
1103.1157
2951152322
In Artificial Intelligence, Coalition Structure Generation (CSG) refers to those cooperative complex problems that require finding an optimal partition, maximising a social welfare, of a set of entities involved in a system into exhaustive and disjoint coalitions. The solution of the CSG problem finds applications in many fields such as Machine Learning (covering machines, clustering), Data Mining (decision tree, discretization), Graph Theory, Natural Language Processing (aggregation), Semantic Web (service composition), and Bioinformatics. The problem of finding the optimal coalition structure is NP-complete. In this paper we present a greedy randomized adaptive search procedure (GRASP) with path-relinking to efficiently search the space of coalition structures. Experiments and comparisons to other algorithms prove the validity of the proposed method in solving this hard combinatorial problem.
As regards the approximate algorithms, a solution based on a genetic algorithm was proposed in @cite_18 ; it performs well when there is some regularity in the search space. Indeed, in order to apply their algorithm, the authors assume that the value of a coalition depends on the other coalitions in the CS, making the algorithm not well suited for the general case. A more recent solution @cite_13 is based on Simulated Annealing @cite_23 , a widely used stochastic local search method. At each iteration the algorithm selects a random neighbour solution @math of a CS @math . The search proceeds with an adjacent CS @math of the original CS @math if @math yields a better social welfare than @math . Otherwise, the search continues from @math with probability @math , where @math is the temperature parameter, which decreases according to the annealing schedule @math .
{ "cite_N": [ "@cite_18", "@cite_13", "@cite_23" ], "mid": [ "2139662247", "", "2024060531" ], "abstract": [ "Coalition formation has been a very active area of research in multiagent systems. Most of this research has concentrated on decentralized procedures that allow self-interested agents to negotiate the formation of coalitions and division of coalition payoffs. A different line of research has addressed the problem of finding the optimal division of agents into coalitions such that the sum total of the the payoffs to all the coalitions is maximized (Larson and Sandholm, 1999). This is the optimal coalition structure identification problem. Deterministic search algorithms have been proposed and evaluated under the assumption that the performance of a coalition is independent of other coalitions. We use an order-based genetic algorithm (OBGA) as a stochastic search process to identify the optimal coalition structure. We compare the performance of the OBGA with a representative deterministic algorithm presented in the literature. Though the OBGA has no performance guarantees, it is found to dominate the deterministic algorithm in a significant number of problem settings. An additional advantage of the OBGA is its scalability to larger problem sizes and to problems where performance of a coalition depends on other coalitions in the environment.", "", "There is a deep and useful connection between statistical mechanics (the behavior of systems with many degrees of freedom in thermal equilibrium at a finite temperature) and multivariate or combinatorial optimization (finding the minimum of a given function depending on many parameters). A detailed analogy with annealing in solids provides a framework for optimization of the properties of very large and complex systems. This connection to statistical mechanics exposes new information and provides an unfamiliar perspective on traditional optimization problems and methods." ] }
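The simulated-annealing acceptance rule described above (always accept an improving neighbour, accept a worsening one with probability exp(ΔV/T) under a geometrically decreasing temperature) can be sketched as follows. The `neighbour` and `value` callbacks, the parameter names, and the cooling constants are our illustrative assumptions, not the cited paper's exact algorithm:

```python
import math
import random

def anneal(initial_cs, neighbour, value, T0=1.0, cooling=0.95, steps=500):
    """Sketch of simulated-annealing search over coalition structures.
    `neighbour(cs)` returns a random adjacent CS; `value(cs)` is the
    social welfare to maximize. Returns the best CS seen."""
    cs, best = initial_cs, initial_cs
    T = T0
    for _ in range(steps):
        cand = neighbour(cs)
        dV = value(cand) - value(cs)
        # Accept improvements; accept a worse neighbour w.p. exp(dV/T).
        if dV > 0 or random.random() < math.exp(dV / T):
            cs = cand
        if value(cs) > value(best):
            best = cs
        T *= cooling  # annealing schedule: T_{i+1} = cooling * T_i
    return best
```

As the temperature decays the acceptance probability for worsening moves approaches zero, so the search degenerates into hill-climbing late in the run; tracking `best` separately makes the procedure trivially anytime.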
1103.0949
1729157025
When dealing with time series with complex non-stationarities, low retrospective regret on individual realizations is a more appropriate goal than low prospective risk in expectation. Online learning algorithms provide powerful guarantees of this form, and have often been proposed for use with non-stationary processes because of their ability to switch between different forecasters or "experts". However, existing methods assume that the set of experts whose forecasts are to be combined are all given at the start, which is not plausible when dealing with a genuinely historical or evolutionary system. We show how to modify the "fixed shares" algorithm for tracking the best expert to cope with a steadily growing set of experts, obtained by fitting new models to new data as it becomes available, and obtain regret bounds for the growing ensemble.
The closest approach to our method is that of Hazan and Seshadhri @cite_12 , who also work within the family of variants on multiplicative weight training. They introduce a new expert at each time step, whose initial weight is a fixed function of time, and do not otherwise implement a "fixed share" of weights, i.e., a minimum weight for each expert. Maintaining such a fixed share is extremely useful when a pre-existing model becomes one of the best, drastically cutting the time needed for it to dominate the ensemble. @cite_12 also does not use tracking regret, but rather the maximum regret against any single expert attained over any contiguous time interval. This time-uniform regret is attractive, and they prove bounds on it, but only by assuming that each individual expert itself has a low, time-uniform regret (in the ordinary sense); some of their results even require low losses, not just low regrets. Our approach, by contrast, is able to accommodate the much more realistic situation where each individual expert may indeed have high loss, or even high regret, because the process is hard to predict and no one model is uniformly applicable.
{ "cite_N": [ "@cite_12" ], "mid": [ "2028132509" ], "abstract": [ "This paper describes a methodology for detecting anomalies from sequentially observed and potentially noisy data. The proposed approach consists of two main elements: 1) filtering, or assigning a belief or likelihood to each successive measurement based upon our ability to predict it from previous noisy observations and 2) hedging, or flagging potential anomalies by comparing the current belief against a time-varying and data-adaptive threshold. The threshold is adjusted based on the available feedback from an end user. Our algorithms, which combine universal prediction with recent work on online convex programming, do not require computing posterior distributions given all current observations and involve simple primal-dual parameter updates. At the heart of the proposed approach lie exponential-family models which can be used in a wide variety of contexts and applications, and which yield methods that achieve sublinear per-round regret against both static and slowly varying product distributions with marginals drawn from the same exponential family. Moreover, the regret against static distributions coincides with the minimax value of the corresponding online strongly convex game. We also prove bounds on the number of mistakes made during the hedging step relative to the best offline choice of the threshold with access to all estimated beliefs and feedback signals. We validate the theory on synthetic data drawn from a time-varying distribution over binary vectors of high dimensionality, as well as on the Enron email dataset." ] }
1103.0949
1729157025
When dealing with time series with complex non-stationarities, low retrospective regret on individual realizations is a more appropriate goal than low prospective risk in expectation. Online learning algorithms provide powerful guarantees of this form, and have often been proposed for use with non-stationary processes because of their ability to switch between different forecasters or "experts". However, existing methods assume that the set of experts whose forecasts are to be combined are all given at the start, which is not plausible when dealing with a genuinely historical or evolutionary system. We show how to modify the "fixed shares" algorithm for tracking the best expert to cope with a steadily growing set of experts, obtained by fitting new models to new data as it becomes available, and obtain regret bounds for the growing ensemble.
Turning to more conventional approaches, econometrics has a large literature on detecting non-stationarity (of the basically-harmless "integrated" type characteristic of random walks), and finding "structural breaks" (change points), after which models must be re-estimated or re-specified @cite_3 . Economists do not seem to have considered an ensemble method like ours, perhaps due to their laudable (if unfulfilled) ambition to capture the exact data-generating process in a single parsimonious model. Similarly, most work on data-set shift and concept drift in machine learning @cite_13 deals with how a single model should be learned (or modified) so as to be robust to various changes in the joint distribution of inputs and outputs. Unlike all these approaches, we do not have to assume that any of our models are well-specified, nor assume anything about the nature of the data-generating process or how it changes over time.
{ "cite_N": [ "@cite_13", "@cite_3" ], "mid": [ "2162651021", "1559361175" ], "abstract": [ "Dataset shift is a common problem in predictive modeling that occurs when the joint distribution of inputs and outputs differs between training and test stages. Covariate shift, a particular case of dataset shift, occurs when only the input distribution changes. Dataset shift is present in most practical applications, for reasons ranging from the bias introduced by experimental design to the irreproducibility of the testing conditions at training time. (An example is -email spam filtering, which may fail to recognize spam that differs in form from the spam the automatic filter has been built on.) Despite this, and despite the attention given to the apparently similar problems of semi-supervised learning and active learning, dataset shift has received relatively little attention in the machine learning community until recently. This volume offers an overview of current efforts to deal with dataset and covariate shift. The chapters offer a mathematical and philosophical introduction to the problem, place dataset shift in relationship to transfer learning, transduction, local learning, active learning, and semi-supervised learning, provide theoretical views of dataset and covariate shift (including decision theoretic and Bayesian perspectives), and present algorithms for covariate shift. Contributors: Shai Ben-David, Steffen Bickel, Karsten Borgwardt, Michael Brckner, David Corfield, Amir Globerson, Arthur Gretton, Lars Kai Hansen, Matthias Hein, Jiayuan Huang, Takafumi Kanamori, Klaus-Robert Mller, Sam Roweis, Neil Rubens, Tobias Scheffer, Marcel Schmittfull, Bernhard Schlkopf, Hidetoshi Shimodaira, Alex Smola, Amos Storkey, Masashi Sugiyama, Choon Hui Teo Neural Information Processing series", "Economies evolve and are subject to sudden shifts precipitated by legislative changes, economic policy, major discoveries, and political turmoil. 
Macroeconometric models are a very imperfect tool for forecasting this highly complicated and changing process. Ignoring these factors leads to a wide discrepancy between theory and practice. In their second book on economic forecasting, Michael Clements and David Hendry ask why some practices seem to work empirically despite a lack of formal support from theory. After reviewing the conventional approach to economic forecasting, they look at the implications for causal modeling, present a taxonomy of forecast errors, and delineate the sources of forecast failure. They show that forecast-period shifts in deterministic factors--interacting with model misspecification, collinearity, and inconsistent estimation--are the dominant source of systematic failure. They then consider various approaches for avoiding systematic forecasting errors, including intercept corrections, differencing, co-breaking, and modeling regime shifts; they emphasize the distinction between equilibrium correction (based on cointegration) and error correction (automatically offsetting past errors). Their results on forecasting have wider implications for the conduct of empirical econometric research, model formulation, the testing of economic hypotheses, and model-based policy analyses." ] }
1103.0949
1729157025
When dealing with time series with complex non-stationarities, low retrospective regret on individual realizations is a more appropriate goal than low prospective risk in expectation. Online learning algorithms provide powerful guarantees of this form, and have often been proposed for use with non-stationary processes because of their ability to switch between different forecasters or "experts". However, existing methods assume that the set of experts whose forecasts are to be combined are all given at the start, which is not plausible when dealing with a genuinely historical or evolutionary system. We show how to modify the "fixed shares" algorithm for tracking the best expert to cope with a steadily growing set of experts, obtained by fitting new models to new data as it becomes available, and obtain regret bounds for the growing ensemble.
There are some ensemble methods which are reminiscent of aspects of our proposal, such as Kolter and Maloof's "additive expert ensemble" algorithm AddExp @cite_1 , the incremental-learning SEA algorithm @cite_2 , and adaptive time windows algorithms (e.g. @cite_0 ). None of these allow the full combination of a growing ensemble with temporally-specialized experts and adaptive weights. Consequently, while some of them can handle mild non-stationarities if the base models are close to well-specified, none of them are able to make strong individual-sequence prediction guarantees like those of Theorem .
{ "cite_N": [ "@cite_0", "@cite_1", "@cite_2" ], "mid": [ "2171809276", "2096846143", "1990079212" ], "abstract": [ "We present an ensemble method for concept drift that dynamically creates and removes weighted experts in response to changes in performance. The method, dynamic weighted majority (*DWM*), uses four mechanisms to cope with concept drift: It trains online learners of the ensemble, it weights those learners based on their performance, it removes them, also based on their performance, and it adds new experts based on the global performance of the ensemble. After an extensive evaluation---consisting of five experiments, eight learners, and thirty data sets that varied in type of target concept, size, presence of noise, and the like---we concluded that *DWM* outperformed other learners that only incrementally learn concept descriptions, that maintain and use previously encountered examples, and that employ an unweighted, fixed-size ensemble of experts.", "We consider online learning where the target concept can change over time. Previous work on expert prediction algorithms has bounded the worst-case performance on any subsequence of the training data relative to the performance of the best expert. However, because these \"experts\" may be difficult to implement, we take a more general approach and bound performance relative to the actual performance of any online learner on this single subsequence. We present the additive expert ensemble algorithm AddExp, a new, general method for using any online learner for drifting concepts. We adapt techniques for analyzing expert prediction algorithms to prove mistake and loss bounds for a discrete and a continuous version of AddExp. Finally, we present pruning methods and empirical results for data sets with concept drift.", "Ensemble methods have recently garnered a great deal of attention in the machine learning community. 
Techniques such as Boosting and Bagging have proven to be highly effective but require repeated resampling of the training data, making them inappropriate in a data mining context. The methods presented in this paper take advantage of plentiful data, building separate classifiers on sequential chunks of training points. These classifiers are combined into a fixed-size ensemble using a heuristic replacement strategy. The result is a fast algorithm for large-scale or streaming data that classifies as well as a single decision tree built on all the data, requires approximately constant memory, and adjusts quickly to concept drift." ] }
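A minimal sketch of a fixed-share multiplicative-weights update over a growing expert pool, in the spirit of the abstract above. The entry convention for new experts (each joins with a small prior share of the existing mass) and all parameter names are our illustrative assumptions, not the paper's exact algorithm:

```python
import numpy as np

def fixed_share_growing(loss_rows, eta=0.5, alpha=0.01):
    """loss_rows[t] holds the losses of the experts alive at round t;
    the pool may grow between rounds. Returns the final weights."""
    w = np.array([1.0])
    for losses in loss_rows:
        if len(losses) > len(w):
            # New experts enter with a small prior share of the mass.
            extra = len(losses) - len(w)
            w = np.concatenate([w, np.full(extra, w.sum() * alpha)])
            w /= w.sum()
        # Multiplicative update on the losses ...
        w = w * np.exp(-eta * np.asarray(losses))
        w /= w.sum()
        # ... then mix in a fixed share alpha, so no expert's weight
        # ever decays to zero and a late bloomer can recover quickly.
        w = (1 - alpha) * w + alpha / len(w)
    return w
```

The fixed-share mixing step is what lets a previously poor expert regain weight fast once it starts predicting well, which is the property the abstract emphasizes for pre-existing models.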
1103.0759
1512975556
In hardware virtualization a hypervisor provides multiple Virtual Machines (VMs) on a single physical system, each executing a separate operating system instance. The hypervisor schedules execution of these VMs much as the scheduler in an operating system does, balancing factors such as fairness and I/O performance. As in an operating system, the scheduler may be vulnerable to malicious behavior on the part of users seeking to deny service to others or maximize their own resource usage. Recently, publicly available cloud computing services such as Amazon EC2 have used virtualization to provide customers with virtual machines running on the provider's hardware, typically charging by wall clock time rather than resources consumed. Under this business model, manipulation of the scheduler may allow theft of service at the expense of other customers, rather than merely reallocating resources within the same administrative domain. We describe a flaw in the Xen scheduler allowing virtual machines to consume almost all CPU time, in preference to other users, and demonstrate kernel-based and user-space versions of the attack. We show results demonstrating the vulnerability in the lab, consuming as much as 98% of CPU time regardless of fair share, as well as on Amazon EC2, where Xen modifications protect other users but still allow theft of service. In the case of EC2, following the responsible disclosure model, we have reported this vulnerability to Amazon; they have since implemented a fix that we have tested and verified (See Appendix B). We provide a novel analysis of the necessary conditions for such attacks, and describe scheduler modifications to eliminate the vulnerability. We present experimental results demonstrating the effectiveness of these defenses while imposing negligible overhead.
Cherkasova and Gupta @cite_11 @cite_12 have done an extensive performance analysis of scheduling in the Xen VMM. They studied I/O performance for three schedulers: BVT, SEDF and the Credit scheduler. Their work showed that both the CPU scheduling algorithm and the scheduler parameters drastically impact I/O performance. Furthermore, they stressed that the I/O model in Xen remains an issue for resource allocation and accounting among VMs. Since Domain-0 is indirectly involved in servicing I/O for guest domains, I/O-intensive domains may receive excess CPU resources unless the processing resources consumed by Domain-0 on behalf of I/O-bound domains are accounted for. To tackle this problem, Gupta @cite_20 introduced the SEDF-DC scheduler, derived from Xen's SEDF scheduler, which charges guest domains for the time spent in Domain-0 on their behalf.
{ "cite_N": [ "@cite_20", "@cite_12", "@cite_11" ], "mid": [ "1939576174", "", "2614676961" ], "abstract": [ "Virtual machines (VMs) have recently emerged as the basis for allocating resources in enterprise settings and hosting centers. One benefit of VMs in these environments is the ability to multiplex several operating systems on hardware based on dynamically changing system characteristics. However, such multiplexing must often be done while observing per-VM performance guarantees or service level agreements. Thus, one important requirement in this environment is effective performance isolation among VMs. In this paper, we address performance isolation across virtual machines in Xen [1]. For instance, while Xen can allocate fixed shares of CPU among competing VMs, it does not currently account for work done on behalf of individual VMs in device drivers. Thus, the behavior of one VM can negatively impact resources available to other VMs even if appropriate per-VM resource limits are in place. In this paper, we present the design and evaluation of a set of primitives implemented in Xen to address this issue. First, XenMon accurately measures per-VM resource consumption, including work done on behalf of a particular VM in Xen's driver domains. Next, our SEDF-DC scheduler accounts for aggregate VM resource consumption in allocating CPU. Finally, ShareGuard limits the total amount of resources consumed in privileged and driver domains based on administrator-specified limits. 
Our performance evaluation indicates that our mechanisms effectively enforce performance isolation for a variety of workloads and configurations.", "", "The primary motivation for enterprises to adopt virtualization technologies is to create a more agile and dynamic IT infrastructure -- with server consolidation, high resource utilization, the ability to quickly add and adjust capacity on demand -- while lowering total cost of ownership and responding more effectively to changing business conditions. However, effective management of virtualized IT environments introduces new and unique requirements, such as dynamically resizing and migrating virtual machines (VMs) in response to changing application demands. Such capacity management methods should work in conjunction with the underlying resource management mechanisms. In general, resource multiplexing and scheduling among virtual machines is poorly understood. CPU scheduling for virtual machines, for instance, has largely been borrowed from the process scheduling research in operating systems. However, it is not clear whether a straight-forward port of process schedulers to VM schedulers would perform just as well. We use the open source Xen virtual machine monitor to perform a comparative evaluation of three different CPU schedulers for virtual machines. We analyze the impact of the choice of scheduler and its parameters on application performance, and discuss challenges in estimating the application resource requirements in virtualized environments." ] }
1103.0759
1512975556
In hardware virtualization, a hypervisor provides multiple Virtual Machines (VMs) on a single physical system, each executing a separate operating system instance. The hypervisor schedules execution of these VMs much as the scheduler in an operating system does, balancing factors such as fairness and I/O performance. As in an operating system, the scheduler may be vulnerable to malicious behavior on the part of users seeking to deny service to others or maximize their own resource usage. Recently, publicly available cloud computing services such as Amazon EC2 have used virtualization to provide customers with virtual machines running on the provider's hardware, typically charging by wall clock time rather than resources consumed. Under this business model, manipulation of the scheduler may allow theft of service at the expense of other customers, rather than merely reallocating resources within the same administrative domain. We describe a flaw in the Xen scheduler allowing virtual machines to consume almost all CPU time, in preference to other users, and demonstrate kernel-based and user-space versions of the attack. We show results demonstrating the vulnerability in the lab, consuming as much as 98% of CPU time regardless of fair share, as well as on Amazon EC2, where Xen modifications protect other users but still allow theft of service. In the case of EC2, following the responsible disclosure model, we have reported this vulnerability to Amazon; they have since implemented a fix that we have tested and verified (see Appendix B). We provide a novel analysis of the necessary conditions for such attacks, and describe scheduler modifications to eliminate the vulnerability. We present experimental results demonstrating the effectiveness of these defenses while imposing negligible overhead.
Govindan @cite_19 proposed a CPU scheduling algorithm as an extension to Xen's SEDF scheduler that preferentially schedules I/O-intensive domains. The key idea behind their algorithm is to count the number of packets flowing into or out of each domain and to schedule the domain with the highest count that has not yet consumed its entire slice.
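The scheduling idea described above can be sketched as follows. This is a minimal, hypothetical model, not Xen's actual SEDF code; all names and data structures here are illustrative assumptions:

```python
# Hypothetical model of the packet-count scheduling idea: among the
# domains that still have CPU slice budget left, pick the one that has
# seen the most network packets (i.e., the most I/O-intensive one).
from dataclasses import dataclass

@dataclass
class Domain:
    name: str
    packet_count: int        # packets observed flowing into/out of the domain
    slice_remaining_us: int  # CPU budget left in the current period

def pick_next_domain(domains):
    runnable = [d for d in domains if d.slice_remaining_us > 0]
    if not runnable:
        return None  # no domain has budget left; fall back to default policy
    return max(runnable, key=lambda d: d.packet_count)

doms = [Domain("web", 120, 500), Domain("batch", 3, 800), Domain("db", 90, 0)]
print(pick_next_domain(doms).name)  # web: most packets among unexhausted slices
```

Note how "db" is skipped despite its high packet count: a domain that has consumed its entire slice is never preferred, which is what keeps the boost from starving CPU-bound domains.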
{ "cite_N": [ "@cite_19" ], "mid": [ "2139190931" ], "abstract": [ "Recent advances in software and architectural support for server virtualization have created interest in using this technology in the design of consolidated hosting platforms. Since virtualization enables easier and faster application migration as well as secure co-location of antagonistic applications, higher degrees of server consolidation are likely to result in such virtualization-based hosting platforms (VHPs). We identify a key shortcoming in existing virtual machine monitors (VMMs) that proves to be an obstacle in operating hosting platforms, such as Internet data centers, under conditions of such high consolidation: CPU schedulers that are agnostic to the communication behavior of modern, multi-tier applications. We develop a new communication-aware CPU scheduling algorithm to alleviate this problem. We implement our algorithm in the Xen VMM and build a prototype VHP on a cluster of servers. Our experimental evaluation with realistic Internet server applications and benchmarks demonstrates the performance/cost benefits and the wide applicability of our algorithms. For example, the TPC-W benchmark exhibited improvements in average response times of up to 35% for a variety of consolidation scenarios. A streaming media server hosted on our prototype VHP was able to satisfactorily service up to 3.5 times as many clients as one running on the default Xen." ] }
1103.0759
1512975556
In hardware virtualization, a hypervisor provides multiple Virtual Machines (VMs) on a single physical system, each executing a separate operating system instance. The hypervisor schedules execution of these VMs much as the scheduler in an operating system does, balancing factors such as fairness and I/O performance. As in an operating system, the scheduler may be vulnerable to malicious behavior on the part of users seeking to deny service to others or maximize their own resource usage. Recently, publicly available cloud computing services such as Amazon EC2 have used virtualization to provide customers with virtual machines running on the provider's hardware, typically charging by wall clock time rather than resources consumed. Under this business model, manipulation of the scheduler may allow theft of service at the expense of other customers, rather than merely reallocating resources within the same administrative domain. We describe a flaw in the Xen scheduler allowing virtual machines to consume almost all CPU time, in preference to other users, and demonstrate kernel-based and user-space versions of the attack. We show results demonstrating the vulnerability in the lab, consuming as much as 98% of CPU time regardless of fair share, as well as on Amazon EC2, where Xen modifications protect other users but still allow theft of service. In the case of EC2, following the responsible disclosure model, we have reported this vulnerability to Amazon; they have since implemented a fix that we have tested and verified (see Appendix B). We provide a novel analysis of the necessary conditions for such attacks, and describe scheduler modifications to eliminate the vulnerability. We present experimental results demonstrating the effectiveness of these defenses while imposing negligible overhead.
Weng @cite_14 found from their analysis that Xen's asynchronous CPU scheduling strategy wastes considerable physical CPU time. To fix this problem, they presented a hybrid scheduling framework that groups VMs into a high-throughput type and a concurrent type and allocates processing resources among VMs based on type. In a similar vein, Kim @cite_17 presented a task-aware VM scheduling mechanism to improve the performance of I/O-bound tasks within domains. Their approach employs gray-box techniques to peer into VMs and identify I/O-bound tasks in mixed workloads.
{ "cite_N": [ "@cite_14", "@cite_17" ], "mid": [ "1999271841", "2115412237" ], "abstract": [ "The virtualization technology makes it feasible that multiple guest operating systems run on a single physical machine. It is the virtual machine monitor that dynamically maps the virtual CPU of virtual machines to physical CPUs according to the scheduling strategy. The scheduling strategy in Xen schedules virtual CPUs of a virtual machines asynchronously while guarantees the proportion of the CPU time corresponding to its weight, maximizing the throughput of the system. However, this scheduling strategy may deteriorate the performance when the virtual machine is used to execute the concurrent applications such as parallel programs or multithreaded programs. In this paper, we analyze the CPU scheduling problem in the virtual machine monitor theoretically, and the result is that the asynchronous CPU scheduling strategy will waste considerable physical CPU time when the system workload is the concurrent application. Then, we present a hybrid scheduling framework for the CPU scheduling in the virtual machine monitor. There are two types of virtual machines in the system: the high-throughput type and the concurrent type. The virtual machine can be set as the concurrent type when the majority of its workload is concurrent applications in order to reduce the cost of synchronization. Otherwise, it is set as the high-throughput type as the default. Moreover, we implement the hybrid scheduling framework based on Xen, and we will give a description of our implementation in details. 
At last, we test the performance of the presented scheduling framework and strategy based on the multi-core platform, and the experiment result indicates that the scheduling framework and strategy is feasible to improve the performance of the virtual machine system.", "The use of virtualization is progressively accommodating diverse and unpredictable workloads as being adopted in virtual desktop and cloud computing environments. Since a virtual machine monitor lacks knowledge of each virtual machine, the unpredictableness of workloads makes resource allocation difficult. Particularly, virtual machine scheduling has a critical impact on I O performance in cases where the virtual machine monitor is agnostic about the internal workloads of virtual machines. This paper presents a task-aware virtual machine scheduling mechanism based on inference techniques using gray-box knowledge. The proposed mechanism infers the I O-boundness of guest-level tasks and correlates incoming events with I O-bound tasks. With this information, we introduce partial boosting, which is a priority boosting mechanism with task-level granularity, so that an I O-bound task is selectively scheduled to handle its incoming events promptly. Our technique focuses on improving the performance of I O-bound tasks within heterogeneous workloads by lightweight mechanisms with complete CPU fairness among virtual machines. All implementation is confined to the virtualization layer based on the Xen virtual machine monitor and the credit scheduler. We evaluate our prototype in terms of I O performance and CPU fairness over synthetic mixed workloads and realistic applications." ] }
1103.0759
1512975556
In hardware virtualization, a hypervisor provides multiple Virtual Machines (VMs) on a single physical system, each executing a separate operating system instance. The hypervisor schedules execution of these VMs much as the scheduler in an operating system does, balancing factors such as fairness and I/O performance. As in an operating system, the scheduler may be vulnerable to malicious behavior on the part of users seeking to deny service to others or maximize their own resource usage. Recently, publicly available cloud computing services such as Amazon EC2 have used virtualization to provide customers with virtual machines running on the provider's hardware, typically charging by wall clock time rather than resources consumed. Under this business model, manipulation of the scheduler may allow theft of service at the expense of other customers, rather than merely reallocating resources within the same administrative domain. We describe a flaw in the Xen scheduler allowing virtual machines to consume almost all CPU time, in preference to other users, and demonstrate kernel-based and user-space versions of the attack. We show results demonstrating the vulnerability in the lab, consuming as much as 98% of CPU time regardless of fair share, as well as on Amazon EC2, where Xen modifications protect other users but still allow theft of service. In the case of EC2, following the responsible disclosure model, we have reported this vulnerability to Amazon; they have since implemented a fix that we have tested and verified (see Appendix B). We provide a novel analysis of the necessary conditions for such attacks, and describe scheduler modifications to eliminate the vulnerability. We present experimental results demonstrating the effectiveness of these defenses while imposing negligible overhead.
A number of other works improve other aspects of virtualized I/O performance @cite_13 @cite_8 @cite_24 @cite_5 @cite_0 and VMM security @cite_23 @cite_1 @cite_21 . To summarize, all of these papers tackle problems of long-term fairness between different classes of VMs, such as CPU-bound and I/O-bound VMs.
{ "cite_N": [ "@cite_8", "@cite_21", "@cite_1", "@cite_24", "@cite_0", "@cite_23", "@cite_5", "@cite_13" ], "mid": [ "2160895517", "2021069544", "2072633121", "2152132676", "2090076638", "1994178809", "2117898742", "199500277" ], "abstract": [ "Currently, I O device virtualization models in virtual machine (VM) environments require involvement of a virtual machine monitor (VMM) and or a privileged VM for each I O operation, which may turn out to be a performance bottleneck for systems with high I O demands, especially those equipped with modern high speed interconnects such as InfiniBand. In this paper, we propose a new device virtualization model called VMM-bypass I O, which extends the idea of OS-bypass originated from user-level communication. Essentially, VMM-bypass allows time-critical I O operations to be carried out directly in guest VMs without involvement of the VMM and or a privileged VM. By exploiting the intelligence found in modern high speed network interfaces, VMM-bypass can significantly improve I O and communication performance for VMs without sacrificing safety or isolation. To demonstrate the idea of VMM-bypass, we have developed a prototype called Xen-IB, which offers InfiniBand virtualization support in the Xen 3.0 VM environment. Xen-IB runs with current InfiniBand hardware and does not require modifications to existing user-level applications or kernel-level drivers that use InfiniBand. Our performance measurements show that Xen-IB is able to achieve nearly the same raw performance as the original InfiniBand driver running in a non-virtualized environment.", "We address the problem of integrity management in a virtualized environment. We introduce a formal integrity model for managing the integrity of arbitrary aspects of a virtualized system. Based on the model, we describe an architecture called PEV, which stands for protection, enforcement, and verification. 
The architecture generalizes the integrity management functions of the Trusted Platform Module (TPM) to cover not just software binaries, but also VMs, virtual devices, and a wide range of security policies. The architecture enables the verification of security compliance and enforcement of security policies. We describe a prototype implementation of the architecture based on the Xen hypervisor. We demonstrate the policy enforcement and compliance checking capabilities of our prototype through multiple use cases.", "Virtual machine monitors (VMMs) have been hailed as the basis for an increasing number of reliable or trusted computing systems. The Xen VMM is a relatively small piece of software -- a hypervisor -- that runs at a lower level than a conventional operating system in order to provide isolation between virtual machines: its size is offered as an argument for its trustworthiness. However, the management of a Xen-based system requires a privileged, full-blown operating system to be included in the trusted computing base (TCB). In this paper, we introduce our work to disaggregate the management virtual machine in a Xen-based system. We begin by analysing the Xen architecture and explaining why the status quo results in a large TCB. We then describe our implementation, which moves the domain builder, the most important privileged component, into a minimal trusted compartment. We illustrate how this approach may be used to implement \"trusted virtualisation\" and improve the security of virtual TPM implementations. Finally, we evaluate our approach in terms of the reduction in TCB size, and by performing a security analysis of the disaggregated system.", "This paper presents hardware and software mechanisms to enable concurrent direct network access (CDNA) by operating systems running within a virtual machine monitor. 
In a conventional virtual machine monitor, each operating system running within a virtual machine must access the network through a software-virtualized network interface. These virtual network interfaces are multiplexed in software onto a physical network interface, incurring significant performance overheads. The CDNA architecture improves networking efficiency and performance by dividing the tasks of traffic multiplexing, interrupt delivery, and memory protection between hardware and software in a novel way. The virtual machine monitor delivers interrupts and provides protection between virtual machines, while the network interface performs multiplexing of the network data. In effect, the CDNA architecture provides the abstraction that each virtual machine is connected directly to its own network interface. Through the use of CDNA, many of the bottlenecks imposed by software multiplexing can be eliminated without sacrificing protection, producing substantial efficiency improvements", "While industry is making rapid advances in system virtualization, for server consolidation and for improving system maintenance and management, it has not yet become clear how virtualization can contribute to the performance of high end systems. In this context, this paper addresses a key issue in system virtualization - how to efficiently virtualize I/O subsystems and peripheral devices. We have developed a novel approach to I/O virtualization, termed self-virtualized devices, which improves I/O performance by off-loading select virtualization functionality onto the device. This permits guest virtual machines to more efficiently (i.e., with less overhead and reduced latency) interact with the virtualized device. The concrete instance of such a device developed and evaluated in this paper is a self-virtualized network interface (SV-NIC), targeting the high end NICs used in the high performance domain.
The SV-NIC (1) provides virtual interfaces (VIFs) to guest virtual machines for an underlying physical device, the network interface, (2) manages the way in which the device's physical resources are used by guest operating systems, and (3) provides high performance, low overhead network access to guest domains. Experimental results are attained in a prototyping environment using an IXP 2400-based ethernet board as a programmable network device. The SV-NIC scales to large numbers of VIFs and guests, and offers VIFs with 77% higher throughput and 53% less latency compared to the current standard virtualized device implementations on hypervisor-based platforms.", "Virtual machines are widely accepted as a promising basis for building secure systems. However, while virtual machines offer effective mechanisms to create isolated environments, mechanisms that offer controlled interaction among VMs are immature. Some VM systems include flexible policy models and some enable MLS enforcement, but the flexible use of policy to control VM interactions has not been developed. In this paper, we propose an architecture that enables administrators to configure virtual machines to satisfy prescribed security goals. We describe the design and implementation of such an architecture using SELinux, Xen and IPsec as the tools to express and enforce policies at the OS, VM and Network layers, respectively. We develop a web application using our architecture and show that we can configure application VMs in such a way that we can verify the enforcement of the security goals of those applications.", "Virtual Machine (VM) environments (e.g., VMware and Xen) are experiencing a resurgence of interest for diverse uses including server consolidation and shared hosting. An application's performance in a virtual machine environment can differ markedly from its performance in a non-virtualized environment because of interactions with the underlying virtual machine monitor and other virtual machines.
However, few tools are currently available to help debug performance problems in virtual machine environments. In this paper, we present Xenoprof, a system-wide statistical profiling toolkit implemented for the Xen virtual machine environment. The toolkit enables coordinated profiling of multiple VMs in a system to obtain the distribution of hardware events such as clock cycles and cache and TLB misses. The toolkit will facilitate a better understanding of performance characteristics of Xen's mechanisms allowing the community to optimize the Xen implementation. We use our toolkit to analyze performance overheads incurred by networking applications running in Xen VMs. We focus on networking applications since virtualizing network I/O devices is relatively expensive. Our experimental results quantify Xen's performance overheads for network I/O device virtualization in uni- and multi-processor systems. With certain Xen configurations, networking workloads in the Xen environment can suffer significant performance degradation. Our results identify the main sources of this overhead which should be the focus of Xen optimization efforts. We also show how our profiling toolkit was used to uncover and resolve performance bugs that we encountered in our experiments which caused unexpected application behavior.", "Virtual Machine Monitors (VMMs) are gaining popularity in enterprise environments as a software-based solution for building shared hardware infrastructures via virtualization. In this work, using the Xen VMM, we present a lightweight monitoring system for measuring the CPU usage of different virtual machines including the CPU overhead in the device driver domain caused by I/O processing on behalf of a particular virtual machine. Our performance study attempts to quantify and analyze this overhead for a set of I/O intensive workloads." ] }
1103.0172
2949841717
Traditional spatial queries return, for a given query object @math , all database objects that satisfy a given predicate, such as epsilon range and @math -nearest neighbors. This paper defines and studies inverse spatial queries, which, given a subset of database objects @math and a query predicate, return all objects which, if used as query objects with the predicate, contain @math in their result. We first show a straightforward solution for answering inverse spatial queries for any query predicate. Then, we propose a filter-and-refinement framework that can be used to improve efficiency. We show how to apply this framework on a variety of inverse queries, using appropriate space pruning strategies. In particular, we propose solutions for inverse epsilon range queries, inverse @math -nearest neighbor queries, and inverse skyline queries. Our experiments show that our framework is significantly more efficient than naive approaches.
Mutual-pruning approaches such as @cite_8 @cite_6 @cite_7 use other points to prune a given index entry @math . TPL @cite_7 is the most general and efficient approach. It uses an R-tree to compute a nearest neighbor ranking of the query point @math . The key idea is to iteratively construct Voronoi hyper-planes around @math using the retrieved neighbors. TPL can be used for inverse @math NN queries where @math , by simply performing a reverse @math NN query for each query point and then intersecting the results (i.e., the brute-force approach).
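The brute-force approach described above can be sketched as follows. The naive `reverse_knn` scan below is a hypothetical stand-in for an efficient R-tree-based primitive such as TPL; all names are illustrative:

```python
# Brute-force inverse kNN: a database point p is an answer iff p has every
# query point in Q among its k nearest neighbors, i.e. iff p appears in
# the reverse kNN result of each q in Q.
import math

def knn(candidates, p, k):
    """The k points of `candidates` closest to p (Euclidean distance)."""
    return sorted(candidates, key=lambda x: math.dist(x, p))[:k]

def reverse_knn(db, q, k):
    """All database points that have q among their k nearest neighbors."""
    return {p for p in db
            if q in knn([x for x in db if x != p] + [q], p, k)}

def inverse_knn(db, Q, k):
    """Intersect the reverse kNN results of all query points in Q."""
    result = reverse_knn(db, Q[0], k)
    for q in Q[1:]:
        result &= reverse_knn(db, q, k)
    return result

db = [(0, 0), (1, 0), (5, 5)]
print(sorted(inverse_knn(db, [(0.1, 0)], 1)))  # [(0, 0), (1, 0)]
```

The intersection step is exactly why this is the brute-force baseline: one full reverse kNN query is issued per query point, with no pruning shared across them.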
{ "cite_N": [ "@cite_7", "@cite_6", "@cite_8" ], "mid": [ "1608524621", "2008908114", "96373761" ], "abstract": [ "Given a point q, a reverse k nearest neighbor (RkNN) query retrieves all the data points that have q as one of their k nearest neighbors. Existing methods for processing such queries have at least one of the following deficiencies: (i) they do not support arbitrary values of k (ii) they cannot deal efficiently with database updates, (iii) they are applicable only to 2D data (but not to higher dimensionality), and (iv) they retrieve only approximate results. Motivated by these shortcomings, we develop algorithms for exact processing of RkNN with arbitrary values of k on dynamic multidimensional datasets. Our methods utilize a conventional data-partitioning index on the dataset and do not require any pre-computation. In addition to their flexibility, we experimentally verify that the proposed algorithms outperform the existing ones even in their restricted focus.", "Reverse Nearest Neighbor (RNN) queries are of particular interest in a wide range of applications such as decision support systems, profile based marketing, data streaming, document databases, and bioinformatics. The earlier approaches to solve this problem mostly deal with two dimensional data. However most of the above applications inherently involve high dimensions and high dimensional RNN problem is still unexplored. In this paper, we propose an approximate solution to answer RNN queries in high dimensions. Our approach is based on the strong correlation in practice between k-NN and RNN. It works in two phases. In the first phase the k-NN of a query point is found and in the next phase they are further analyzed using a novel type of query Boolean Range Query (BRQ). Experimental results show that BRQ is much more efficient than both NN and range queries, and can be effectively used to answer RNN queries. Performance is further improved by running multiple BRQ simultaneously. 
The proposed approach can also be used to answer other variants of RNN queries such as RNN of order k, bichromatic RNN, and Matching Query which has many applications of its own. Our technique can efficiently answer NN, RNN, and its variants with approximately the same number of I/Os as running an NN query.", "In this paper, we propose an algorithm for answering reverse nearest neighbor (RNN) queries, a problem formulated only recently. This class of queries is strongly related to that of nearest neighbor (NN) queries, although the two are not necessarily complementary. Unlike nearest neighbor queries, RNN queries find the set of database points that have the query point as the nearest neighbor. There is no other proposal we are aware of that provides an algorithmic approach to answer RNN queries. The earlier approach for RNN queries (KM) is based on the pre-computation of neighborhood information that is organized in terms of auxiliary data structures. It can be argued that the pre-computation of the RNN information for all points in the database can be too restrictive. In the case of dynamic databases, insert and update operations are expensive and can lead to modifications of large parts of the auxiliary data structures. Also, answers to RNN queries for a set of data points depend on the number of dimensions taken into consideration when initializing the data structures. We propose an algorithmic approach that is flexible enough to support a larger class of RNN queries, and in order to support them we also extend the current method of nearest neighbor search to that of conditional nearest neighbor." ] }
1103.0903
2098342458
We revisit experimental studies performed by Ekman on dead-water (Ekman, 1904) using modern techniques in order to present new insights on this peculiar phenomenon. We extend its description to more general situations such as a three-layer fluid or a linearly stratified fluid in presence of a pycnocline, showing the robustness of dead-water phenomenon. We observe large amplitude nonlinear internal waves which are coupled to the boat dynamics, and we emphasize that the modeling of the wave-induced drag requires more analysis, taking into account nonlinear effects. Dedicated to Fridtj ¨ of Nansen born 150 yr ago (10 October 1861). In this paper, we present detailed experimental results on the dead-water phenomenon as shown in the video by (2008). The material is organized as fol- lows. In the remaining of this section, we briefly review the different studies of this phenomenon, either directly related to Ekman's work or only partially connected to it. Section 2 presents the experimental set-up. The case of a two-layer fluid is addressed in Sect. 3, followed by the case with a three-layer fluid in Sect. 4. The more realistic stratification with a pycnocline above a linearly stratified fluid is finally discussed in Sect. 5. Our conclusions, and suggestions for future work are presented in Sect. 6.
@cite_1 took advantage of the dead-water effect to study the influence of interfacial waves on wind-induced surface waves. The study relates the statistical properties of the surface waves to the currents induced by internal waves.
{ "cite_N": [ "@cite_1" ], "mid": [ "2025021278" ], "abstract": [ "Internal waves were generated by a ship using the ‘dead water’ effect in areas where the water contains a strong near-surface density gradient. The effects of these internal waves on wind waves were examined. The principal measurements were slope statistics of the wind waves and horizontal currents in the internal waves. The effects on the wind waves were always observable from an aircraft; however, in measurements made only along the ship's track the effects of the internal waves were not always readily distinguishable from other factors that influence the wind wave field. By using statistical techniques, relationships have been established between the wind waves, the internal waves, and the wind velocity. The principal finding is that the wind wave field is relatively more sensitive to internal wave currents at low wind speeds than at high wind speeds. Numerical values are given." ] }
1103.0903
2098342458
We revisit experimental studies performed by Ekman on dead-water (Ekman, 1904) using modern techniques in order to present new insights on this peculiar phenomenon. We extend its description to more general situations such as a three-layer fluid or a linearly stratified fluid in presence of a pycnocline, showing the robustness of dead-water phenomenon. We observe large amplitude nonlinear internal waves which are coupled to the boat dynamics, and we emphasize that the modeling of the wave-induced drag requires more analysis, taking into account nonlinear effects. Dedicated to Fridtj ¨ of Nansen born 150 yr ago (10 October 1861). In this paper, we present detailed experimental results on the dead-water phenomenon as shown in the video by (2008). The material is organized as fol- lows. In the remaining of this section, we briefly review the different studies of this phenomenon, either directly related to Ekman's work or only partially connected to it. Section 2 presents the experimental set-up. The case of a two-layer fluid is addressed in Sect. 3, followed by the case with a three-layer fluid in Sect. 4. The more realistic stratification with a pycnocline above a linearly stratified fluid is finally discussed in Sect. 5. Our conclusions, and suggestions for future work are presented in Sect. 6.
In a slightly different perspective, @cite_0 demonstrated that an object accelerating in a stratified fluid generates oblique and transverse internal waves, the latter of which can be decomposed as a sum of baroclinic modes, with the lowest mode always present. @cite_4 further showed through experiments that the baroclinic modes generated in such a dynamical evolution propagate independently of each other, although nonlinear effects must become important as the amplitude of the internal waves increases.
{ "cite_N": [ "@cite_0", "@cite_4" ], "mid": [ "2019611332", "108453617" ], "abstract": [ "Many papers study the steady wave system around bodies moving in thermoclines but little attention has been given to unsteady wave systems. This paper concentrates on the unsteady wave systems around accelerating bodies in thermoclines. The wave shapes are calculated using a theory derived from a dispersion relation based on an exp-tanh density profile. All modes of oscillation can be determined and it is shown that for the lowest mode both oblique and transverse waves occur whereas for the higher modes the presence of transverse waves depends on the background conditions and on the speed of the body. Cauchy-Poisson impulsive start waves are included. The theoretical wave shapes compare quite well with those calculated using finite-difference formulations of the full Navier-Stokes equations when a body accelerates from rest.", "A bench study of the amplitudes, mode composition, and phase structure of the internal waves generated by a vertical cylinder in the presence of a near-surface pycnocline has been performed; the pycnocline took the form of a stratified fluid layer located between two quasi-homogeneous layers of thicknesses h1 and h2=2h1. In the experiments, the cylinder traveled at velocities critical with respect to internal wave generation. Different cases of model submergence relative to the pycnocline are considered. The dependence of the mode structure and the amplitude-phase characteristics of the forced internal waves on the body velocity and its relative submergence is analyzed. The parameters of both steady and unsteady wave systems are studied." ] }
1103.0041
2950038108
In Combinatorial Public Projects, there is a set of projects that may be undertaken, and a set of self-interested players with a stake in the set of projects chosen. A public planner must choose a subset of these projects, subject to a resource constraint, with the goal of maximizing social welfare. Combinatorial Public Projects has emerged as one of the paradigmatic problems in Algorithmic Mechanism Design, a field concerned with solving fundamental resource allocation problems in the presence of both selfish behavior and the computational constraint of polynomial-time. We design a polynomial-time, truthful-in-expectation, (1-1 e)-approximation mechanism for welfare maximization in a fundamental variant of combinatorial public projects. Our results apply to combinatorial public projects when players have valuations that are matroid rank sums (MRS), which encompass most concrete examples of submodular functions studied in this context, including coverage functions, matroid weighted-rank functions, and convex combinations thereof. Our approximation factor is the best possible, assuming P != NP. Ours is the first mechanism that achieves a constant factor approximation for a natural NP-hard variant of combinatorial public projects.
Combinatorial Public Projects, in particular its exact variant, was first introduced by Papadimitriou, Schapira and Singer @cite_25 . They show that no deterministic truthful mechanism for exact CPP with submodular valuations can guarantee better than a @math approximation to the optimal social welfare. The non-strategic version of the problem, on the other hand, is equivalent to maximizing a submodular function subject to a cardinality constraint, and admits a @math -approximation algorithm due to Nemhauser, Wolsey and Fisher @cite_14 ; this is optimal @cite_20 assuming @math .
{ "cite_N": [ "@cite_14", "@cite_25", "@cite_20" ], "mid": [ "2757107770", "2141502056", "" ], "abstract": [ "Let N be a finite set and z be a real-valued function defined on the set of subsets of N that satisfies z(S)+z(T) >= z(S ∪ T)+z(S ∩ T) for all S, T ⊆ N. Such a function is called submodular. We consider the problem max { z(S) : S ⊆ N, |S| <= K }, where z is submodular. Several hard combinatorial optimization problems can be posed in this framework. For example, the problem of finding a maximum weight independent set in a matroid, when the elements of the matroid are colored and the elements of the independent set can have no more than K colors, is in this class. The uncapacitated location problem is a special case of this matroid optimization problem. We analyze greedy and local improvement heuristics and a linear programming relaxation for this problem. Our results are worst case bounds on the quality of the approximations. For example, when z(S) is nondecreasing and z(∅) = 0, we show that a 'greedy' heuristic always produces a solution whose value is at least 1 - [(K-1)/K]^K times the optimal value. This bound can be achieved for each K and has a limiting value of (e-1)/e, where e is the base of the natural logarithm.", "The central problem in computational mechanism design is the tension between incentive compatibility and computational efficiency. We establish the first significant approximability gap between algorithms that are both truthful and computationally-efficient, and algorithms that only achieve one of these two desiderata. This is shown in the context of a novel mechanism design problem which we call the combinatorial public project problem (cppp). cppp is an abstraction of many common mechanism design situations, ranging from elections of kibbutz committees to network design. Our result is actually made up of two complementary results -- one in the communication-complexity model and one in the computational-complexity model. 
Both these hardness results heavily rely on a combinatorial characterization of truthful algorithms for our problem. Our computational-complexity result is one of the first impossibility results connecting mechanism design to complexity theory; its novel proof technique involves an application of the Sauer-Shelah Lemma and may be of wider applicability, both within and without mechanism design.", "" ] }
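The (1-1/e) greedy guarantee of Nemhauser, Wolsey and Fisher discussed in the record above can be illustrated with a minimal sketch for the maximum-coverage special case, a canonical submodular objective. This is an illustrative aside, not part of any cited paper's code; the function name and data are assumptions:

```python
def greedy_max_cover(universe_sets, k):
    """Greedy (1 - 1/e)-approximation for max coverage: repeatedly pick the
    set with the largest marginal gain in covered elements, k times.
    Coverage is monotone submodular, so the Nemhauser-Wolsey-Fisher bound applies."""
    chosen, covered = [], set()
    for _ in range(k):
        # Marginal gain of each not-yet-chosen set given current coverage.
        gains = {i: len(universe_sets[i] - covered)
                 for i in range(len(universe_sets)) if i not in chosen}
        best = max(gains, key=gains.get)
        if gains[best] == 0:          # no set adds anything new; stop early
            break
        chosen.append(best)
        covered |= universe_sets[best]
    return chosen, covered
```

For instance, with sets {1,2,3}, {3,4}, {4,5,6,7}, {7,8} and k=2, the greedy first takes the 4-element set and then the 3-element one, covering 7 elements.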
1103.0041
2950038108
In Combinatorial Public Projects, there is a set of projects that may be undertaken, and a set of self-interested players with a stake in the set of projects chosen. A public planner must choose a subset of these projects, subject to a resource constraint, with the goal of maximizing social welfare. Combinatorial Public Projects has emerged as one of the paradigmatic problems in Algorithmic Mechanism Design, a field concerned with solving fundamental resource allocation problems in the presence of both selfish behavior and the computational constraint of polynomial-time. We design a polynomial-time, truthful-in-expectation, (1-1 e)-approximation mechanism for welfare maximization in a fundamental variant of combinatorial public projects. Our results apply to combinatorial public projects when players have valuations that are matroid rank sums (MRS), which encompass most concrete examples of submodular functions studied in this context, including coverage functions, matroid weighted-rank functions, and convex combinations thereof. Our approximation factor is the best possible, assuming P != NP. Ours is the first mechanism that achieves a constant factor approximation for a natural NP-hard variant of combinatorial public projects.
Buchfuhrer, Schapira and Singer @cite_0 explored approximation algorithms and truthful mechanisms for CPP with various classes of valuations in the submodular hierarchy. The result of @cite_0 most relevant to our paper is a lower bound of @math on truthful mechanisms for the exact variant of CPP with coverage valuations --- a class of valuations for which our mechanism for flexible CPP obtains a @math approximation.
{ "cite_N": [ "@cite_0" ], "mid": [ "2031122640" ], "abstract": [ "The Combinatorial Public Projects Problem (CPPP) is an abstraction of resource allocation problems in which agents have preferences over alternatives, and an outcome that is to be collectively shared by the agents is chosen so as to maximize the social welfare. We explore CPPP from both computational perspective and a mechanism design perspective. We examine CPPP in the hierarchy of complement free (subadditive) valuation classes and present positive and negative results for both unrestricted and truthful algorithms." ] }
1103.0041
2950038108
In Combinatorial Public Projects, there is a set of projects that may be undertaken, and a set of self-interested players with a stake in the set of projects chosen. A public planner must choose a subset of these projects, subject to a resource constraint, with the goal of maximizing social welfare. Combinatorial Public Projects has emerged as one of the paradigmatic problems in Algorithmic Mechanism Design, a field concerned with solving fundamental resource allocation problems in the presence of both selfish behavior and the computational constraint of polynomial-time. We design a polynomial-time, truthful-in-expectation, (1-1 e)-approximation mechanism for welfare maximization in a fundamental variant of combinatorial public projects. Our results apply to combinatorial public projects when players have valuations that are matroid rank sums (MRS), which encompass most concrete examples of submodular functions studied in this context, including coverage functions, matroid weighted-rank functions, and convex combinations thereof. Our approximation factor is the best possible, assuming P != NP. Ours is the first mechanism that achieves a constant factor approximation for a natural NP-hard variant of combinatorial public projects.
Most recently, Dobzinski @cite_11 showed two lower bounds for CPP in the value oracle model: A lower bound of @math on universally truthful mechanisms for flexible CPP with submodular valuations, and a lower bound of @math on truthful-in-expectation mechanisms for CPP with submodular valuations. We note that the latter was the first unconditional lower bound on truthful-in-expectation mechanisms.
{ "cite_N": [ "@cite_11" ], "mid": [ "2078273056" ], "abstract": [ "We show that every universally truthful randomized mechanism for combinatorial auctions with submodular valuations that provides an approximation ratio of m1 2 -e must use exponentially many value queries, where m is the number of items. In contrast, ignoring incentives there exist constant ratio approximation algorithms for this problem. Our approach is based on a novel direct hardness technique that completely skips the notoriously hard step of characterizing truthful mechanisms. The characterization step was the main obstacle for proving impossibility results in algorithmic mechanism design so far. We demonstrate two additional applications of our new technique: (1) an impossibility result for universally-truthful polynomial time flexible combinatorial public projects and (2) an impossibility result for truthful-in-expectation mechanisms for exact combinatorial public projects. The latter is the first result that bounds the power of polynomial-time truthful in expectation mechanisms in any setting." ] }
1102.5554
1985356378
We present a new type of the EnKF for data assimilation in spatial models that uses diagonal approximation of the state covariance in the wavelet space to achieve adaptive localization. The efficiency of the new method is demonstrated on an example.
Diagonal approximation of the covariance in the frequency space was proposed for weather fields @cite_17 . Wavelets are well suited for approximation of meteorological fields @cite_16 , and the diagonal approximation was extended to wavelet spaces @cite_12 @cite_14 @cite_0 . The Fourier domain KF @cite_3 is the KF applied to independent frequency modes. The Laplace operator represented by a diagonal matrix in the frequency space was used for a fast OSI @cite_5 . The inverse of the Laplace operator was proposed as a covariance model @cite_4 , but higher negative powers @cite_5 yield better distributions.
{ "cite_N": [ "@cite_14", "@cite_4", "@cite_3", "@cite_0", "@cite_5", "@cite_16", "@cite_12", "@cite_17" ], "mid": [ "", "2091538160", "", "2152498685", "", "2066917788", "2116629292", "2163772203" ], "abstract": [ "", "This work examines which generalized covariance function when used in the stochastic approach produces the flattest possible estimate of an unknown function that is consistent with the data. Such an estimate is the plainest possible continuous function, thus in a sense eliminating details that are irrelevant or unsupported by data. The answer is found from the solution of the following variational problem: Determine the function that reproduces the data, has the smallest gradient (in the square norm sense), and has a gradient that vanishes at large distances from the observations. The generalized covariance functions are shown to be the Green's functions for the free-space Laplace equation: the linear distance, in one dimension; the logarithmic distance in two dimensions; and the inverse distance in three dimensions. It is demonstrated that they are appropriate covariance functions for intrinsic random fields, a modification is proposed to facilitate numerical implementation, and a couple of examples are presented to illustrate the applicability of the methodology.", "", "Background error covariances can be estimated from an ensemble of forecast differences. The finite size of the ensemble induces a sampling noise in the calculated statistics. It is shown formally that a wavelet diagonal approach amounts to locally averaging the correlations, and its ability to spatially filter this sampling noise is thus investigated experimentally. This is first studied in a simple analytical one dimensional framework. The capacity of a wavelet diagonal approach to model the scale variations over the domain is illustrated. Moreover, the sampling noise appears to be better filtered than when only using a Schur filter, in particular for small ensembles. 
The filtering properties are then illustrated for an ensemble of M´", "", "Abstract Orthonormal wavelet analysis (OWA) is a special form of wavelet analysis, especially suitable for analyzing spatial structures, such as atmospheric fields. For this purpose, OWA is much more efficient and accurate than the nonorthogonal wavelet transform (WT), which was introduced to the meteorological community recently and which is more suitable for time series analysis. Whereas the continuous WT is strictly correct only for infinite domains, OWA is derived from periodizing and discretizing the infinite-domain case and so is correct for periodic boundary conditions. Unlike Fourier spectra, OWA is not shift invariant. Nor is it equivariant like the WT; that is, the OWA output does not shift as its input shifts. Two remedies are to combine all possible shifts, known as the overcomplete, nonorthogonal shift equivariant WT, or else to use a “best shift,” known as best shift wavelet analysis. Although shift invariant and orthonormal w.r.t. arbitrary inputs, the latter’s optimization generally depend...", "The use of orthogonal wavelets for the representation of background error covariances over a limited area is studied. Each wavelet function contains both information on position and information on scale: using a diagonal correlation matrix in wavelet space thus gives the possibility of representing the local variations of correlation scale. To this end, a generalized family of orthogonal Meyer wavelets that are not restricted to dyadic domains (i.e., powers of 2) is introduced. A three-bases approach is used, which allows one to take advantage of the respective properties of the spectral, wavelet, and gridpoint spaces. 
While the implied local anisotropies are relatively small, the local changes in the two-dimensional length scale are rather well represented.", "Statistical and balance features of forecast errors are generally incorporated in the background constraint of variational data assimilation. Forecast error covariances are here estimated with a spectral approach and from a set of forecast differences; autocovariances are calculated with a nonseparable scheme, and multiple linear regressions are used in the formulation of cross covariances. Such an approach was first developed for global models; it is here adapted to ALADIN, a bi-Fourier high-resolution limited-area model, and extended to a multivariate study of humidity forecast errors. Results for autocovariances confirm the importance of nonseparability, in terms of both vertical variability of horizontal correlations and dependence of vertical correlations with horizontal scale; high-resolution spatial correlations are obtained, which should enable a high-resolution analysis. Moreover nonnegligible relationships are found between forecast errors of humidity and those of mass and wind fields." ] }
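The diagonal-in-wavelet-space covariance approximation discussed in the record above can be sketched as follows. This is a minimal illustration using an orthonormal Haar basis; the basis choice, function names, and ensemble layout (members in rows) are assumptions for exposition, not the cited implementations:

```python
import numpy as np

def haar_matrix(n):
    """Orthonormal Haar matrix of size n (n a power of 2), built recursively."""
    if n == 1:
        return np.array([[1.0]])
    h = haar_matrix(n // 2)
    top = np.kron(h, [1.0, 1.0])             # scaling (low-pass) rows
    bot = np.kron(np.eye(n // 2), [1.0, -1.0])  # finest-detail (high-pass) rows
    return np.vstack([top, bot]) / np.sqrt(2.0)

def diagonal_wavelet_covariance(ensemble):
    """Estimate the state covariance from an ensemble, keeping only the
    diagonal of the sample covariance in the Haar wavelet basis -- a cheap,
    adaptively localized approximation of the full covariance."""
    W = haar_matrix(ensemble.shape[1])
    coeffs = (ensemble - ensemble.mean(axis=0)) @ W.T  # anomalies in wavelet space
    var = coeffs.var(axis=0, ddof=1)                   # keep only the diagonal
    return W.T @ np.diag(var) @ W                      # back to physical space
```

Because only a diagonal is stored per scale/location, the approximation filters the sampling noise of small ensembles while still letting the implied correlation length vary across the domain.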
1102.4411
2122057349
Consider the following unequal error protection scenario. One special message, dubbed the “red alert” message, is required to have an extremely small probability of missed detection. The remainder of the messages must keep their average probability of error and probability of false alarm below a certain threshold. The goal then is to design a codebook that maximizes the error exponent of the red alert message while ensuring that the average probability of error and probability of false alarm go to zero as the blocklength goes to infinity. This red alert exponent has previously been characterized for discrete memoryless channels. This paper completely characterizes the optimal red alert exponent for additive white Gaussian noise channels with block power constraints.
In @cite_14 , Borade, Nakiboğlu, and Zheng study bit-wise and message-wise unequal error protection (UEP) problems and error exponents. The red alert problem is a message-wise UEP problem in which one message is special and the remaining messages are standard. While @cite_14 focuses on general DMCs near capacity, Lemma 1 of that paper develops a general sharp bound on the red alert exponent for DMCs at any rate below capacity (both with and without feedback). Specializing to the exponent achieved at capacity, let @math denote the input alphabet, @math the channel transition matrix, and @math the capacity-achieving output distribution of the DMC. Then the optimal red alert exponent at capacity can be expressed in terms of these quantities, where @math is the KL divergence. We also mention recent work by Nakiboğlu @cite_1 @cite_5 that considers the generalization where a strictly positive error exponent is required of the standard messages.
{ "cite_N": [ "@cite_5", "@cite_14", "@cite_1" ], "mid": [ "", "2163446469", "2137871036" ], "abstract": [ "", "An information-theoretic framework for unequal error protection is developed in terms of the exponential error bounds. The fundamental difference between the bit-wise and message-wise unequal error protection ( UEP) is demonstrated, for fixed-length block codes on discrete memoryless channels (DMCs) without feedback. Effect of feedback is investigated via variable-length block codes. It is shown that, feedback results in a significant improvement in both bit-wise and message-wise UEPs (except the single message case for missed detection). The distinction between false-alarm and missed-detection formalizations for message-wise UEP is also considered. All results presented are at rates close to capacity.", "The bit-wise unequal error protection problem, for the case when the number of groups of bits l is fixed, is considered for variable-length block codes with feedback. An encoding scheme based on fixed-length block codes with erasures is used to establish inner bounds to the achievable performance for finite expected decoding time. A new technique for bounding the performance of variable-length block codes is used to establish outer bounds to the performance for a given expected decoding time. The inner and the outer bounds match one another asymptotically and characterize the achievable region of rate-exponent vectors, completely. The single-message message-wise unequal error protection problem for variable-length block codes with feedback is also solved as a necessary step on the way." ] }
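The capacity exponent referenced in the paragraph above can be written, as a hedged reconstruction consistent with the quantities just defined (with @math the input alphabet, @math the channel, and @math the capacity-achieving output distribution; the notation here is the editor's, not verified against the cited paper):

```latex
E_{\text{red}} \;=\; \max_{x \in \mathcal{X}} \, D\!\left( W(\cdot \mid x) \,\middle\|\, Q^{*} \right)
```

Intuitively, the red alert codeword repeats the input symbol whose induced output distribution is farthest, in KL divergence, from the output distribution induced by the standard codebook.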
1102.4411
2122057349
Consider the following unequal error protection scenario. One special message, dubbed the “red alert” message, is required to have an extremely small probability of missed detection. The remainder of the messages must keep their average probability of error and probability of false alarm below a certain threshold. The goal then is to design a codebook that maximizes the error exponent of the red alert message while ensuring that the average probability of error and probability of false alarm go to zero as the blocklength goes to infinity. This red alert exponent has previously been characterized for discrete memoryless channels. This paper completely characterizes the optimal red alert exponent for additive white Gaussian noise channels with block power constraints.
The fundamental mechanism through which high red alert exponents are achieved is a binary hypothesis test. By designing the induced distributions at the output of the channel to be far apart as measured by KL divergence, we can distinguish whether the red alert or some standard codeword was sent. The test threshold is biased to minimize the probability of missed detection and is analyzed via an application of Stein's Lemma. This sort of biased hypothesis test occurs in numerous other communication settings with feedback, such as @cite_16 @cite_8 @cite_13 and, as mentioned earlier, these codes are also used as a component in streaming data systems (see, for instance, @cite_3 @cite_19 @cite_15 @cite_2 ). There is also a rich literature on the interplay between hypothesis testing and information theory, which we cannot do justice to here (see, for instance, @cite_0 @cite_4 @cite_18 ).
{ "cite_N": [ "@cite_18", "@cite_4", "@cite_8", "@cite_3", "@cite_0", "@cite_19", "@cite_2", "@cite_15", "@cite_16", "@cite_13" ], "mid": [ "1999805402", "2021547642", "2115827190", "", "2034597819", "2164985961", "2108078432", "2107543180", "2040877654", "" ], "abstract": [ "The authors present the answer to a so-called converse problem by giving the explicit form of the power exponent. This strong converse should be regarded as completing a weak converse previously obtained by R.E. Blahut (1974).", "A new class of statistical problems is introduced, involving the presence of communication constraints on remotely collected data. Bivariate hypothesis testing, H_0 : P_XY against H_1 : P̄_XY, is considered when the statistician has direct access to Y data but can be informed about X data only at a prescribed finite rate R. For any fixed R the smallest achievable probability of an error of type 2 with the probability of an error of type 1 being at most ε is shown to go to zero with an exponential rate not depending on ε as the sample size goes to infinity. A single-letter formula for the exponent is given when P̄_XY = P_X P_Y (test against independence), and partial results are obtained for general P̄_XY. An application to a search problem of Chernoff is also given.", "By an extension of Gallager's bounding methods, exponential error bounds applicable to coding schemes involving erasures, variable-size lists, and decision feedback are obtained. The bounds are everywhere the tightest known.", "", "The testing of binary hypotheses is developed from an information-theoretic point of view, and the asymptotic performance of optimum hypothesis testers is developed in exact analogy to the asymptotic performance of optimum channel codes. The discrimination, introduced by Kullback, is developed in a role analogous to that of mutual information in channel coding theory. Based on the discrimination, an error-exponent function e(r) is defined. 
This function is found to describe the behavior of optimum hypothesis testers asymptotically with block length. Next, mutual information is introduced as a minimum of a set of discriminations. This approach has later coding significance. The channel reliability-rate function E(R) is defined in terms of discrimination, and a number of its mathematical properties developed. Sphere-packing-like bounds are developed in a relatively straightforward and intuitive manner by relating e(r) and E(R). This ties together the aforementioned developments and gives a lower bound in terms of a hypothesis testing model. The result is valid for discrete or continuous probability distributions. The discrimination function is also used to define a source code reliability-rate function. This function allows a simpler proof of the source coding theorem and also bounds the code performance as a function of block length, thereby providing the source coding analog of E(R).", "We show how to exploit a noisy feedback link to implement high-reliability communication. We specify a variable-length coding strategy that achieves the error exponent (in delay) of erasure decoding using any noisy feedback channel which has a positive zero-rate random coding error exponent. Building on this result, we give a second approach that, depending only on the capacity of the feedback link, achieves an error exponent up to half of the Burnashev exponent - the maximum exponent that can be achieved with a noiseless feedback link. The resulting exponent can be far larger than the exponent of erasure decoding, particularly at rates close to capacity", "In this paper we bound the reliability function of decoding with errors and erasures for a streaming multiple-access channel with feedback. 
We show that, subject to an arbitrarily small bound on the probability of erasure, the best known lower bound on the reliability function (i.e., achievable error exponent) for the single-user version of our problem can also be achieved in the multi-user setting for high sum-rates. In other words, at high rates the interference of another user need not decrease the achievable error exponent of either.", "For output-symmetric discrete memoryless channels (DMCs) at even moderately high rates, fixed-block-length communication systems show no improvements in their error exponents with feedback. This paper studies systems with fixed end-to-end delay and shows that feedback generally provides dramatic gains in the error exponents. A new upper bound (the uncertainty-focusing bound) is given on the probability of symbol error in a fixed-delay communication system with feedback. This bound turns out to have a form similar to Viterbi's bound used for the block error probability of convolutional codes as a function of the fixed constraint length. The uncertainty-focusing bound is shown to be asymptotically achievable with noiseless feedback for erasure channels as well as for any output-symmetric DMC that has strictly positive zero-error capacity. Furthermore, it can be achieved in a delay-universal (anytime) fashion even if the feedback itself is delayed by a small amount. Finally, it is shown that for end-to-end delay, it is generally possible at high rates to beat the sphere-packing bound for general DMCs - thereby providing a counterexample to a conjecture of Pinsker.", "The presence of a feedback channel makes possible a variety of sequential transmission procedures, each of which can be classified as either a block-transmission or a continuous-transmission scheme according to the way in which information is encoded for transmission over a noisy forward channel. 
A sequential continuous-transmission system employing a binary symmetric forward channel (but which is suitable for use with any discrete memoryless forward channel) and a noiseless feedback channel is described. Its error exponent is shown to be substantially greater than the optimum block-code error exponent at all transmission rates less than channel capacity. The average value and the first-order probability distribution of the effective constraint length, found by simulating the system on an IBM 709 computer, are also given.", "" ] }
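The biased binary hypothesis test described in the related-work paragraph above, thresholding a log-likelihood ratio so that missed detections are driven down at the cost of false alarms (the Stein's-lemma regime), can be sketched for an AWGN channel. The signal levels, variable names, and function name below are illustrative assumptions, not the paper's construction:

```python
def detect_red_alert(y, mu_red, mu_std, sigma, bias):
    """Log-likelihood-ratio test between the red-alert output distribution
    N(mu_red, sigma^2) and the standard-codeword distribution N(mu_std, sigma^2),
    applied i.i.d. over the received block y. Choosing a negative bias shifts
    the threshold toward declaring 'red alert', trading a higher false-alarm
    probability for a smaller missed-detection probability."""
    # log [ N(y; mu_red) / N(y; mu_std) ] summed over the block, which
    # simplifies for equal variances to the expression below.
    llr = sum((2 * yi - mu_red - mu_std) * (mu_red - mu_std) for yi in y)
    llr /= 2 * sigma ** 2
    return llr >= bias
```

With `bias = 0` this is the maximum-likelihood test; pushing `bias` toward minus infinity (at a rate sublinear in the blocklength) drives the missed-detection exponent toward the KL divergence between the two output distributions.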
1102.4326
2151748886
Privacy policies often place requirements on the purposes for which a governed entity may use personal information. For example, regulations, such as HIPAA, require that hospital employees use medical information for only certain purposes, such as treatment. Thus, using formal or automated methods for enforcing privacy policies requires a semantics of purpose requirements to determine whether an action is for a purpose or not. We provide such a semantics using a formalism based on planning. We model planning using a modified version of Markov Decision Processes, which exclude redundant actions for a formal definition of redundant. We use the model to formalize when a sequence of actions is only for or not for a purpose. This semantics enables us to provide an algorithm for automating auditing, and to describe formally and compare rigorously previous enforcement methods.
The works most similar to ours in approach have been on minimal disclosure, which requires that the amount of information used in granting a request for access should be as little as possible while still achieving the purpose behind the request. Massacci, Mylopoulos, and Zannone define minimal disclosure for Hippocratic databases @cite_17 . Barth, Mitchell, Datta, and Sundaram study minimal disclosure in the context of workflows @cite_10 . They model a workflow as meeting a utility goal if it satisfies a temporal logic formula. Minimizing the amount of information disclosed is similar to an agent maximizing his reward and thereby not performing actions that have costs but no benefits. However, in addition to having different research goals, we consider several factors that these works do not, including quantitative purposes that are satisfied to varying degrees and probabilistic behavior resulting in actions being for a purpose despite the purpose not being achieved.
{ "cite_N": [ "@cite_10", "@cite_17" ], "mid": [ "2105401205", "1971433976" ], "abstract": [ "We propose an abstract model of business processes for the purpose of (i) evaluating privacy policy in light of the goals of the process and (ii) developing automated support for privacy policy compliance and audit. In our model, agents that send and receive tagged personal information are assigned organizational roles and responsibilities. We present approaches and algorithms for determining whether a business process design simultaneously achieves privacy and the goals of the organization (utility). The model also allows us to develop a notion of minimal exposure of personal information, for a given process. We investigate the problem of auditing with inexact information and develop methods to identify a set of potentially culpable individuals when privacy is breached. The audit methods draw on traditional causality concepts to reduce the effort needed to search audit logs for irresponsible actions.", "The protection of customer privacy is a fundamental issue in today’s corporate marketing strategies. Not surprisingly, many research efforts have proposed new privacy-aware technologies. Among them, Hippocratic databases offer mechanisms for enforcing privacy rules in database systems for inter-organizational business processes (also known as virtual organizations). This paper extends these mechanisms to allow for hierarchical purposes, distributed authorizations and minimal disclosure supporting the business processes of virtual organizations that want to offer their clients a number of ways to fulfill a service. Specifically, we use a goal-oriented approach to analyze privacy policies of the enterprises involved in a business process. On the basis of the purpose hierarchy derived through a goal refinement process, we provide algorithms for determining the minimum set of authorizations needed to achieve a service. 
This allows us to automatically derive access control policies for an inter-organizational business process from the collection of privacy policies associated with different participating enterprises. By using effective on-line algorithms, the derivation of such minimal information can also be done on-the-fly by the customer wishing to access a service." ] }
1102.4326
2151748886
Privacy policies often place requirements on the purposes for which a governed entity may use personal information. For example, regulations, such as HIPAA, require that hospital employees use medical information for only certain purposes, such as treatment. Thus, using formal or automated methods for enforcing privacy policies requires a semantics of purpose requirements to determine whether an action is for a purpose or not. We provide such a semantics using a formalism based on planning. We model planning using a modified version of Markov Decision Processes, which exclude redundant actions for a formal definition of redundant. We use the model to formalize when a sequence of actions is only for or not for a purpose. This semantics enables us to provide an algorithm for automating auditing, and to describe formally and compare rigorously previous enforcement methods.
Work on understanding the components of privacy policies has shown that purpose is a common component of privacy rules (e.g., @cite_40 @cite_22 ). Some languages for specifying access-control policies allow the purpose of an action to partially determine if access is granted @cite_21 @cite_25 @cite_35 @cite_18 . However, these languages do not give a formal semantics to the purposes. Instead they rely upon the system using the policy to determine whether an action is for a purpose or not.
{ "cite_N": [ "@cite_35", "@cite_18", "@cite_22", "@cite_21", "@cite_40", "@cite_25" ], "mid": [ "2089513810", "2160724118", "2107930672", "60261825", "2146989366", "1505817364" ], "abstract": [ "Privacy is a concept which received relatively little attention during the rapid growth and spread of information technology through the 1980's and 1990's. Design to make information easily accessible, without particular attention to issues such as whether an individual had a desire or right to control access to and use of particular information was seen as the more pressing goal. We believe that there will be an increasing awareness of a fundamental need to address privacy concerns in information technology, and that doing so will require an understanding of policies that govern information use as well as the development of technologies that can implement such policies. The research reported here describes our efforts to design a privacy management workbench which facilitates privacy policy authoring, implementation, and compliance monitoring. This case study highlights the work of identifying organizational privacy requirements, analyzing existing technology, on-going research to identify approaches that address these requirements, and iteratively designing and validating a prototype with target users for flexible privacy technologies.", "Today organizations do not have good ways of linking their written privacy policies with the implementation of those policies. To assist organizations in addressing this issue, our human-centered research has focused on understanding organizational privacy management needs, and, based on those needs, creating a usable and effective policy workbench called SPARCLE. SPARCLE will enable organizational users to enter policies in natural language, parse the policies to identify policy elements and then generate a machine readable (XML) version of the policy. 
In the future, SPARCLE will then enable mapping of policies to the organization's configuration and provide audit and compliance tools to ensure that the policy implementation operates as intended. In this paper, we present the strategies employed in the design and implementation of the natural language parsing capabilities that are part of the functional version of the SPARCLE authoring utility. We have created a set of grammars which execute on a shallow parser that are designed to identify the rule elements in privacy policy rules. We present empirical usability evaluation data from target organizational users of the SPARCLE system and highlight the parsing accuracy of the system with the organizations' privacy policies. The successful implementation of the parsing capabilities is an important step towards our goal of providing a usable and effective method for organizations to link the natural language version of privacy policies to their implementation, and subsequent verification through compliance auditing of the enforcement logs.", "Information practices that use personal, financial, and health-related information are governed by US laws and regulations to prevent unauthorized use and disclosure. To ensure compliance under the law, the security and privacy requirements of relevant software systems must properly be aligned with these regulations. However, these regulations describe stakeholder rules, called rights and obligations, in complex and sometimes ambiguous legal language. These \"rules\" are often precursors to software requirements that must undergo considerable refinement and analysis before they become implementable. To support the software engineering effort to derive security requirements from regulations, we present a methodology for directly extracting access rights and obligations from regulation texts. 
The methodology provides statement-level coverage for an entire regulatory document to consistently identify and infer six types of data access constraints, handle complex cross references, resolve ambiguities, and assign required priorities between access rights and obligations to avoid unlawful information disclosures. We present results from applying this methodology to the entire regulation text of the US Health Insurance Portability and Accountability Act (HIPAA) Privacy Rule.", "", "Software requirements, rights, permissions, obligations, and operations of policy enforcing systems are often misaligned. Our goal is to develop tools and techniques that help requirements engineers and policy makers bring policies and system requirements into better alignment. Goals from requirements engineering are useful for distilling natural language policy statements into structured descriptions of these interactions; however, they are limited in that they are not easy to compare with one another despite sharing common semantic features. In this paper, we describe a process called semantic parameterization that we use to derive semantic models from goals mined from privacy policy documents. We present example semantic models that enable comparing policy statements and present a template method for generating natural language policy statements (and ultimately requirements) from unique semantic models. The semantic models are described by a context-free grammar called KTL that has been validated within the context of the most frequently expressed goals in over 100 Internet privacy policy documents. KTL is supported by a policy analysis tool that supports queries and policy statement generation.", "Foreword Preface Part I. Privacy and P3P 1. Introduction to P3P How P3P Works P3P-Enabling a Web Site Why Web Sites Adopt P3P 2. 
The Online Privacy Landscape Online Privacy Concerns Fair Information Practice Principles Privacy Laws Privacy Seals Chief Privacy Officers Privacy-Related Organizations 3. Privacy Technology Encryption Tools Anonymity and Pseudonymity Tools Filters Identity-Management Tools Other Tools 4. P3P History The Origin of the Idea The Internet Privacy Working Group W3C Launches the P3P Project The Evolving P3P Specification The Patent Issue Feedback from Europe Finishing the Specification Legal Implications Criticism Part II. P3P-Enabling Your Web Site 5. Overview and Options P3P-Enabled Web Site Components P3P Deployment Steps Creating a Privacy Policy Analyzing the Use of Cookies and Third-Party Content One Policy or Many? Generating a P3P Policy and Policy Reference File Helping User Agents Find Your Policy Reference File Combination Files Compact Policies The Safe Zone Testing Your Web Site 6. P3P Policy Syntax XML Syntax General Assertions Data-Specific Assertions The P3P Extension Mechanism The Policy File 7. Creating P3P Policies Gathering Information About Your Site's Data Practices Turning the Information You Gathered into a P3P Policy Writing a Compact Policy Avoiding Common Pitfalls 8. Creating and Referencing Policy Reference Files Creating a Policy Reference File Referencing a Policy Reference File P3P Policies in Policy Reference Files Changing Your P3P Policy or Policy Reference File Avoiding Common Pitfalls 9. Data Schemas Sets, Elements, and Structures Fixed and Variable Categories P3P Base Data Schema Writing a P3P Data Schema 10. P3P-Enabled Web Site Examples Simple Sites Third-Party Agents Third Parties with Their Own Policies Examples From Real Web Sites Part III. P3P Software and Design 11. P3P Vocabulary Design Issues Rating Systems and Vocabularies P3P Vocabulary Terms What's Not in the P3P Vocabulary 12. P3P User Agents and Other Tools P3P User Agents Other Types of P3P Tools P3P Specification Compliance Requirements 13. 
A P3P Preference Exchange Language (APPEL) APPEL Goals APPEL Evaluator Engines Writing APPEL Rule Sets Processing APPEL Rules Other Privacy Preference Languages 14. User Interface Case Studies Privacy Preference Settings User Agent Behavior Accessibility Privacy Part IV. Appendixes A. P3P Policy and Policy Reference File Syntax Quick Reference B. Configuring Web Servers to Include P3P Headers C. P3P in IE6 D. How to Create a Customized Privacy Import File for IE6 E. P3P Guiding Principles Index" ] }
1102.4326
2151748886
Privacy policies often place requirements on the purposes for which a governed entity may use personal information. For example, regulations, such as HIPAA, require that hospital employees use medical information for only certain purposes, such as treatment. Thus, using formal or automated methods for enforcing privacy policies requires a semantics of purpose requirements to determine whether an action is for a purpose or not. We provide such a semantics using a formalism based on planning. We model planning using a modified version of Markov Decision Processes, which exclude redundant actions for a formal definition of redundant. We use the model to formalize when a sequence of actions is only for or not for a purpose. This semantics enables us to provide an algorithm for automating auditing, and to describe formally and compare rigorously previous enforcement methods.
We borrow the notion of non-redundancy from Mackie's work on formalizing causation using counterfactual reasoning @cite_38 . In particular, Mackie defines a cause to be a non-redundant part of a sufficient explanation of an effect. Roughly speaking, we replace the causes with actions and the effect with a purpose. The extension to our semantics proposed in , may be seen as another instance of non-redundancy. This time, we replace the causes with purposes and the effect with an action. This suggests that for an action to be for a purpose, we expect both that the action was non-redundant for improving that purpose and that the purpose was non-redundant in motivating the action. That is, we expect planning to be parsimonious.
{ "cite_N": [ "@cite_38" ], "mid": [ "1632653496" ], "abstract": [ "Studies causation both as a concept and as it is 'in the objects.' Offers new accounts of the logic of singular causal statements, the form of causal regularities, the detection of causal relationships, the asymmetry of cause and effect, and necessary connection, and it relates causation to functional and statistical laws and to teleology." ] }
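The non-redundancy idea above can be illustrated with a toy sketch. This is not the paper's formal MDP machinery; the states, actions, and rewards below are invented. An action in a plan is flagged redundant when deleting it changes neither the final state nor the total reward collected.

```python
def execute(plan, transitions, rewards, start="s0"):
    """Run a plan of actions; actions with no defined transition are no-ops.
    Returns the final state and the total reward collected."""
    state, total = start, 0
    for action in plan:
        nxt = transitions.get((state, action))
        if nxt is None:
            continue  # inapplicable action: no state change, no reward
        total += rewards.get((state, action), 0)
        state = nxt
    return state, total

def redundant_actions(plan, transitions, rewards):
    """Indices of actions whose removal leaves the outcome unchanged."""
    outcome = execute(plan, transitions, rewards)
    return [i for i in range(len(plan))
            if execute(plan[:i] + plan[i + 1:], transitions, rewards) == outcome]

# Hypothetical hospital workflow: "browse" contributes nothing to the purpose.
transitions = {("s0", "login"): "s1", ("s1", "read_record"): "s2"}
rewards = {("s1", "read_record"): 1}  # reading the record serves the purpose
plan = ["login", "browse", "read_record"]
print(redundant_actions(plan, transitions, rewards))  # [1]
```

Under this toy definition, "login" is non-redundant (it enables the rewarded action) even though it carries no reward itself, which mirrors the intuition that an action can be for a purpose by improving it indirectly.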
1102.4326
2151748886
Privacy policies often place requirements on the purposes for which a governed entity may use personal information. For example, regulations, such as HIPAA, require that hospital employees use medical information for only certain purposes, such as treatment. Thus, using formal or automated methods for enforcing privacy policies requires a semantics of purpose requirements to determine whether an action is for a purpose or not. We provide such a semantics using a formalism based on planning. We model planning using a modified version of Markov Decision Processes, which exclude redundant actions for a formal definition of redundant. We use the model to formalize when a sequence of actions is only for or not for a purpose. This semantics enables us to provide an algorithm for automating auditing, and to describe formally and compare rigorously previous enforcement methods.
Psychological studies have produced models of human thought (e.g., @cite_7 ). However, these are too low-level and incomplete for our needs @cite_42 . The GOMS formalism provides a higher-level model, but is limited to selecting behavior using simple planning approaches @cite_14 . Simon's approach of satisficing @cite_37 and related heuristic-based approaches @cite_20 model more complex planning, but with less precise predictions.
{ "cite_N": [ "@cite_37", "@cite_14", "@cite_7", "@cite_42", "@cite_20" ], "mid": [ "2148962857", "2148334620", "2136518234", "1569677744", "1989020845" ], "abstract": [ "Introduction, 99. — I. Some general features of rational choice, 100.— II. The essential simplifications, 103. — III. Existence and uniqueness of solutions, 111. — IV. Further comments on dynamics, 113. — V. Conclusion, 114. — Appendix, 115.", "Since the publication of The Psychology of Human-Computer Interaction , the GOMS model has been one of the most widely known theoretical concepts in HCI. This concept has produced several GOMS analysis techniques that differ in appearance and form, underlying architectural assumptions, and predictive power. This article compares and contrasts four popular variants of the GOMS family (the Keystroke-Level Model, the original GOMS formulation, NGOMSL, and CPM-GOMS) by applying them to a single task example.", "Adaptive control of thought–rational (ACT–R; J. R. Anderson & C. Lebiere, 1998) has evolved into a theory that consists of multiple modules but also explains how these modules are integrated to produce coherent cognition. The perceptual-motor modules, the goal module, and the declarative memory module are presented as examples of specialized systems in ACT–R. These modules are associated with distinct cortical regions. These modules place chunks in buffers where they can be detected by a production system that responds to patterns of information in the buffers. At any point in time, a single production rule is selected to respond to the current pattern. Subsymbolic processes serve to guide the selection of rules to fire as well as the internal operations of some modules. Much of learning involves tuning of these subsymbolic processes.
A number of simple and complex empirical examples are described to illustrate how these modules function singly and in concert.", "Planning and the Cognitive Paradigm Planning and the PASS Theory The Neuropsychology of Planning The Development of Planning Search and Planning Conceptual Planning New Directions", "In a complex and uncertain world, humans and animals make decisions under the constraints of limited knowledge, resources, and time. Yet models of rational decision making in economics, cognitive science, biology, and other fields largely ignore these real constraints and instead assume agents with perfect information and unlimited time. About forty years ago, Herbert Simon challenged this view with his notion of \"bounded rationality.\" Today, bounded rationality has become a fashionable term used for disparate views of reasoning. This book promotes bounded rationality as the key to understanding how real people make decisions. Using the concept of an \"adaptive toolbox,\" a repertoire of fast and frugal rules for decision making under uncertainty, it attempts to impose more order and coherence on the idea of bounded rationality. The contributors view bounded rationality neither as optimization under constraints nor as the study of people's reasoning fallacies. The strategies in the adaptive toolbox dispense with optimization and, for the most part, with calculations of probabilities and utilities. The book extends the concept of bounded rationality from cognitive tools to emotions; it analyzes social norms, imitation, and other cultural tools as rational strategies; and it shows how smart heuristics can exploit the structure of environments." ] }
1102.4104
1557065471
Discriminative patterns are association patterns that occur with disproportionate frequency in some classes versus others, and have been studied under names such as emerging patterns and contrast sets. Such patterns have demonstrated considerable value for classification and subgroup discovery, but a detailed understanding of the types of interactions among items in a discriminative pattern is lacking. To address this issue, we propose to categorize discriminative patterns according to four types of item interaction: (i) driver-passenger, (ii) coherent, (iii) independent additive and (iv) synergistic beyond independent additive. Either of the last three is of practical importance, with the latter two representing a gain in the discriminative power of a pattern over its subsets. Synergistic patterns are most restrictive, but perhaps the most interesting since they capture a cooperative effect. For domains such as genetic research, differentiating among these types of patterns is critical since each yields very different biological interpretations. For general domains, the characterization provides a novel view of the nature of the discriminative patterns in a dataset, which yields insights beyond those provided by current approaches that focus mostly on pattern-based classification and subgroup discovery. This paper presents a comprehensive discussion that defines these four pattern types and investigates their properties and their relationship to one another. In addition, these ideas are explored for a variety of datasets (ten UCI datasets, one gene expression dataset and two genetic-variation datasets). The results demonstrate the existence, characteristics and statistical significance of the different types of patterns. They also illustrate how pattern characterization can provide novel insights into discriminative pattern mining and the discriminative structure of different datasets.
Over the past decade, many approaches have studied discriminative patterns and related topics. The most relevant related work was discussed earlier in Section . Among other work focusing on mining discriminative patterns, the most relevant ones are @cite_28 @cite_37 @cite_39 @cite_22 . Many existing approaches have also used discriminative patterns for classification @cite_4 @cite_11 @cite_35 @cite_14 @cite_34 @cite_10 . Additional related papers in the area include @cite_39 @cite_24 @cite_37 @cite_31 @cite_36 @cite_43 . We also refer readers to a comprehensive survey on discriminative patterns by @cite_1 .
{ "cite_N": [ "@cite_35", "@cite_37", "@cite_14", "@cite_4", "@cite_22", "@cite_28", "@cite_36", "@cite_1", "@cite_39", "@cite_24", "@cite_43", "@cite_31", "@cite_34", "@cite_10", "@cite_11" ], "mid": [ "2116396873", "2004809831", "", "2154642793", "2029817244", "2126146704", "1522344466", "2156821882", "", "2151699449", "2058704078", "44842662", "2164281374", "2112558645", "2117169652" ], "abstract": [ "The application of frequent patterns in classification has demonstrated its power in recent studies. It often adopts a two-step approach: frequent pattern (or classification rule) mining followed by feature selection (or rule ranking). However, this two-step process could be computationally expensive, especially when the problem scale is large or the minimum support is low. It was observed that frequent pattern mining usually produces a huge number of \"patterns\" that could not only slow down the mining process but also make feature selection hard to complete. In this paper, we propose a direct discriminative pattern mining approach, DDPMine, to tackle the efficiency issue arising from the two-step approach. DDPMine performs a branch-and-bound search for directly mining discriminative patterns without generating the complete pattern set. Instead of selecting best patterns in a batch, we introduce a \"feature-centered\" mining approach that generates discriminative patterns sequentially on a progressively shrinking FP-tree by incrementally eliminating training instances. The instance elimination effectively reduces the problem size iteratively and expedites the mining process. Empirical results show that DDPMine achieves orders of magnitude speedup without any downgrade of classification accuracy. It outperforms the state-of-the-art associative classification methods in terms of both accuracy and efficiency.", "Patterns of contrast are a very important way of comparing multi-dimensional datasets. 
Such patterns are able to capture regions of high difference between two classes of data, and are useful for human experts and the construction of classifiers. However, mining such patterns is particularly challenging when the number of dimensions is large. This paper describes a new technique for mining several varieties of contrast pattern, based on the use of Zero-Suppressed Binary Decision Diagrams (ZBDDs), a powerful data structure for manipulating sparse data. We study the mining of both simple contrast patterns, such as emerging patterns, and more novel and complex contrasts, which we call disjunctive emerging patterns. A performance study demonstrates our ZBDD technique is highly scalable, substantially improves on state of the art mining for emerging patterns and can be effective for discovering complex contrasts from datasets with thousands of attributes.", "", "Classification rule mining aims to discover a small set of rules in the database that forms an accurate classifier. Association rule mining finds all the rules existing in the database that satisfy some minimum support and minimum confidence constraints. For association rule mining, the target of discovery is not pre-determined, while for classification rule mining there is one and only one predetermined target. In this paper, we propose to integrate these two mining techniques. The integration is done by focusing on mining a special subset of association rules, called class association rules (CARs). An efficient algorithm is also given for building a classifier based on the set of discovered CARs. Experimental results show that the classifier built this way is, in general, more accurate than that produced by the state-of-the-art classification system C4.5. 
In addition, this integration helps to solve a number of problems that exist in the current classification systems.", "We study how to efficiently compute significant association rules according to common statistical measures such as a chi-squared value or correlation coefficient. For this purpose, one might consider to use of the Apriori algorithm, but the algorithm needs major conversion, because none of these statistical metrics are anti-monotone, and the use of higher support for reducing the search space cannot guarantee solutions in its the search space. We here present a method of estimating a tight upper bound on the statistical metric associated with any superset of an itemset, as well as the novel use of the resulting information of upper bounds to prune unproductive supersets while traversing itemset lattices. Experimental tests demonstrate the efficiency of this method.", "Classification aims to discover a model from training data that can be used to predict the class of test instances. In this paper, we propose the use of jumping emerging patterns (JEPs) as the basis for a new classifier called the JEP-Classifier. Each JEP can capture some crucial difference between a pair of datasets. Then, aggregating all JEPs of large supports can produce more potent classification power. Procedurally, the JEP-Classifier learns the pair-wise features (sets of JEPs) contained in the training data, and uses the collective impacts contributed by the most expressive pair-wise features to determine the class labels of the test data. Using only the most expressive JEPs in the JEP-Classifier strengthens its resistance to noise in the training data, and reduces its complexity (as there are usually a very large number of JEPs). We use two algorithms for constructing the JEP-Classifier which are both scalable and efficient. These algorithms make use of the border representation to efficiently store and manipulate JEPs. 
We also present experimental results which show that the JEP-Classifier achieves much higher testing accuracies than the association-based classifier of [8], which was reported to outperform C4.5 in general.", "Emerging patterns (EPs) are associations of features whose frequencies increase significantly from one class to another. They have been proven useful to build powerful classifiers and to help establishing diagnosis. Because of the huge search space, mining and representing EPs is a hard task for large datasets. Thanks to the use of recent results on condensed representations of frequent closed patterns, we propose here an exact condensed representation of EPs. We also give a method to provide EPs with the highest growth rates, we call them strong emerging patterns (SEPs). In collaboration with the Philips company, experiments show the interests of SEPs.", "This paper gives a survey of contrast set mining (CSM), emerging pattern mining (EPM), and subgroup discovery (SD) in a unifying framework named supervised descriptive rule discovery. While all these research areas aim at discovering patterns in the form of rules induced from labeled data, they use different terminology and task definitions, claim to have different goals, claim to use different rule learning heuristics, and use different means for selecting subsets of induced patterns. This paper contributes a novel understanding of these subareas of data mining by presenting a unified terminology, by explaining the apparent differences between the learning tasks as variants of a unique supervised descriptive rule discovery task and by exploring the apparent differences between the approaches. It also shows that various rule learning heuristics used in CSM, EPM and SD algorithms all aim at optimizing a trade off between rule coverage and precision. The commonalities (and differences) between the approaches are showcased on a selection of best known variants of CSM, EPM and SD algorithms. 
The paper also provides a critical survey of existing supervised descriptive rule discovery visualization methods.", "", "This paper addresses a data analysis task, known as contrast set mining, whose goal is to find differences between contrasting groups. As a methodological novelty, it is shown that this task can be effectively solved by transforming it to a more common and well-understood subgroup discovery task. The transformation is studied in two learning settings, a one-versus-all and a pairwise contrast set mining setting, uncovering the conditions for each of the two choices. Moreover, the paper shows that the explanatory potential of discovered contrast sets can be improved by offering additional contrast set descriptors, called the supporting factors. The proposed methodology has been applied to uncover distinguishing characteristics of two groups of brain stroke patients, both with rapidly developing loss of brain function due to ischemia: those with ischemia caused by thrombosis and by embolism, respectively.", "Discriminative patterns can provide valuable insights into data sets with class labels, that may not be available from the individual features or the predictive models built using them. Most existing approaches work efficiently for sparse or low-dimensional data sets. However, for dense and high-dimensional data sets, they have to use high thresholds to produce the complete results within limited time, and thus, may miss interesting low-support patterns. In this paper, we address the necessity of trading off the completeness of discriminative pattern discovery with the efficient discovery of low-support discriminative patterns from such data sets. We propose a family of antimonotonic measures named SupMaxK that organize the set of discriminative patterns into nested layers of subsets, which are progressively more complete in their coverage, but require increasingly more computation.
In particular, the member of SupMaxK with K = 2, named SupMaxPair, is suitable for dense and high-dimensional data sets. Experiments on both synthetic data sets and a cancer gene expression data set demonstrate that there are low-support patterns that can be discovered using SupMaxPair but not by existing approaches. Furthermore, we show that the low-support discriminative patterns that are only discovered using SupMaxPair from the cancer gene expression data set are statistically significant and biologically relevant. This illustrates the complementarity of SupMaxPair to existing approaches for discriminative pattern discovery. The codes and data set for this paper are available at http://vk.cs.umn.edu/SMP .", "Closed sets have been proven successful in the context of compacted data representation for association rule learning. However, their use is mainly descriptive, dealing only with unlabeled data. This paper shows that when considering labeled data, closed sets can be adapted for classification and discrimination purposes by conveniently contrasting covering properties on positive and negative examples. We formally prove that these sets characterize the space of relevant combinations of features for discriminating the target class. In practice, identifying relevant/irrelevant combinations of features through closed sets is useful in many applications: to compact emerging patterns of typical descriptive mining applications, to reduce the number of essential rules in classification, and to efficiently learn subgroup descriptions, as demonstrated in real-life subgroup discovery experiments on a high dimensional microarray data set.", "With ever-increasing amounts of graph data from disparate sources, there has been a strong need for exploiting significant graph patterns with user-specified objective functions. Most objective functions are not antimonotonic, which could fail all of frequency-centric graph mining algorithms. 
In this paper, we give the first comprehensive study on general mining method aiming to find most significant patterns directly. Our new mining framework, called LEAP (Descending Leap Mine), is developed to exploit the correlation between structural similarity and significance similarity in a way that the most significant pattern could be identified quickly by searching dissimilar graph patterns. Two novel concepts, structural leap search and frequency descending mining, are proposed to support leap search in graph pattern space. Our new mining method revealed that the widely adopted branch-and-bound search in data mining literature is indeed not the best, thus sketching a new picture on scalable graph pattern discovery. Empirical results show that LEAP achieves orders of magnitude speedup in comparison with the state-of-the-art method. Furthermore, graph classifiers built on mined patterns outperform the up-to-date graph kernel method in terms of efficiency and accuracy, demonstrating the high promise of such patterns.", "Software is a ubiquitous component of our daily life. We often depend on the correct working of software systems. Due to the difficulty and complexity of software systems, bugs and anomalies are prevalent. Bugs have caused billions of dollars loss, in addition to privacy and security threats. In this work, we address software reliability issues by proposing a novel method to classify software behaviors based on past history or runs. With the technique, it is possible to generalize past known errors and mistakes to capture failures and anomalies. Our technique first mines a set of discriminative features capturing repetitive series of events from program execution traces. It then performs feature selection to select the best features for classification. These features are then used to train a classifier to detect failures. 
Experiments and case studies on traces of several benchmark software systems and a real-life concurrency bug from MySQL server show the utility of the technique in capturing failures and anomalies. On average, our pattern-based classification technique outperforms the baseline approach by 24.68 in accuracy.", "The application of frequent patterns in classification appeared in sporadic studies and achieved initial success in the classification of relational data, text documents and graphs. In this paper, we conduct a systematic exploration of frequent pattern-based classification, and provide solid reasons supporting this methodology. It was well known that feature combinations (patterns) could capture more underlying semantics than single features. However, inclusion of infrequent patterns may not significantly improve the accuracy due to their limited predictive power. By building a connection between pattern frequency and discriminative measures such as information gain and Fisher score, we develop a strategy to set minimum support in frequent pattern mining for generating useful patterns. Based on this strategy, coupled with a proposed feature selection algorithm, discriminative frequent patterns can be generated for building high quality classifiers. We demonstrate that the frequent pattern-based classification framework can achieve good scalability and high accuracy in classifying large datasets. Empirical studies indicate that significant improvement in classification accuracy is achieved (up to 12 in UCI datasets) using the so-selected discriminative frequent patterns." ] }
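The discriminative-power idea running through this record can be sketched minimally. The measure below is simply the absolute support gap of a pattern between two classes (a simplified stand-in for measures like DiffSup, not any one paper's exact definition), and the item names and transactions are invented.

```python
def support(pattern, transactions):
    """Fraction of transactions that contain every item of the pattern."""
    return sum(1 for t in transactions if pattern <= t) / len(transactions)

def diff_sup(pattern, class_a, class_b):
    """Absolute difference in support across the two classes: a simple
    measure of a pattern's discriminative power."""
    return abs(support(pattern, class_a) - support(pattern, class_b))

# Tiny invented dataset: {"a", "b"} occurs together only in class A.
class_a = [{"a", "b"}, {"a", "b", "c"}, {"a"}, {"b"}]
class_b = [{"a"}, {"c"}, {"b", "c"}, {"c", "d"}]
print(diff_sup({"a", "b"}, class_a, class_b))  # 0.5
```

Note that the items "a" and "b" are individually weak discriminators here (each appears in both classes), while the pair is strongly discriminative: exactly the kind of gain over subsets that the synergistic pattern category is meant to capture.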
1102.4639
1865535204
The random walk is fundamental to modeling dynamic processes on networks. Metrics based on the random walk have been used in many applications from image processing to Web page ranking. However, how appropriate are random walks to modeling and analyzing social networks? We argue that unlike a random walk, which conserves the quantity diffusing on a network, many interesting social phenomena, such as the spread of information or disease on a social network, are fundamentally non-conservative. When an individual infects her neighbor with a virus, the total amount of infection increases. We classify diffusion processes as conservative and non-conservative and show how these differences impact the choice of metrics used for network analysis, as well as our understanding of network structure and behavior. We show that Alpha-Centrality, which mathematically describes non-conservative diffusion, leads to new insights into the behavior of spreading processes on networks. We give a scalable approximate algorithm for computing the Alpha-Centrality in a massive graph. We validate our approach on real-world online social networks of Digg. We show that a non-conservative metric, such as Alpha-Centrality, produces better agreement with empirical measure of influence than conservative metrics, such as PageRank. We hope that our investigation will inspire further exploration into the realms of conservative and non-conservative metrics in social network analysis.
The interplay of the structural properties of the underlying network with the diffusion processes occurring on it contributes to the complexity of real-life networks. For example, in epidemiology, the dynamics of disease spread on a network and the epidemic threshold are closely related to the spectral radius of the graph @cite_27 . Similarly, a random walk on a graph is closely related to the Laplacian of the graph @cite_33 .
{ "cite_N": [ "@cite_27", "@cite_33" ], "mid": [ "1501825524", "2096188852" ], "abstract": [ "How will a virus propagate in a real network? Does an epidemic threshold exist for a finite graph? How long does it take to disinfect a network given particular values of infection rate and virus death rate? We answer the first question by providing equations that accurately model virus propagation in any network including real and synthesized network graphs. We propose a general epidemic threshold condition that applies to arbitrary graphs: we prove that, under reasonable approximations, the epidemic threshold for a network is closely related to the largest eigenvalue of its adjacency matrix. Finally, for the last question, we show that infections tend to zero exponentially below the epidemic threshold. We show that our epidemic threshold model subsumes many known thresholds for special-case graphs (e.g., Erdos-Renyi, BA power-law, homogeneous); we show that the threshold tends to zero for infinite power-law graphs. We show that our threshold condition holds for arbitrary graphs.", "We examine the relationship between PageRank and several invariants occurring in the study of random walks and electrical networks. We consider a generalized version of hitting time and effective resistance with an additional parameter which controls the ‘speed’ of diffusion. We will establish their connection with PageRank. Through these connections, a combinatorial interpretation of Page-Rank is given in terms of rooted spanning forests by using a generalized version of the matrix-tree theorem. Using PageRank, we will illustrate that the generalized hitting time leads to finding sparse cuts and efficient approximation algorithms for PageRank can be used for approximating hitting time and effective resistance." ] }
1102.3493
2139848116
In distributed storage systems built using commodity hardware, it is necessary to have data redundancy in order to ensure system reliability. In such systems, it is also often desirable to be able to quickly repair storage nodes that fail. We consider a scheme — introduced by El Rouayheb and Ramchandran — which uses combinatorial block design in order to design storage systems that enable efficient (and exact) node repair. In this work, we investigate systems where node sizes may be much larger than replication degrees, and explicitly provide algorithms for constructing these storage designs. Our designs, which are related to projective geometries, are based on the construction of bipartite cage graphs (with girth 6) and the concept of mutually-orthogonal Latin squares. Via these constructions, we can guarantee that the resulting designs require the fewest number of storage nodes for the given parameters, and can further show that these systems can be easily expanded without need for frequent reconfiguration.
The problem of distributed storage with efficient repair is discussed in @cite_20 . Using network coding, the authors propose a scheme for storing data in which node repair is functional, i.e., a repaired node need not hold an exact copy of the lost data, only data that preserves the system's recovery properties. The authors of @cite_20 also introduce the notion of a storage-bandwidth tradeoff, and discuss ways to implement either minimum-storage or minimum-bandwidth systems. Even though exact repair of storage nodes is sometimes necessary, the storage-bandwidth tradeoff under exact repair is not yet fully understood. Building upon the network coding constructions of @cite_20 , @cite_3 give a scheme for achieving the minimum-bandwidth operating point under exact repair, thereby attaining one point on the storage-bandwidth tradeoff curve.
{ "cite_N": [ "@cite_3", "@cite_20" ], "mid": [ "2111324462", "2951800112" ], "abstract": [ "In the distributed storage setting that we consider, data is stored across n nodes in the network such that the data can be recovered by connecting to any subset of k nodes. Additionally, one can repair a failed node by connecting to any d nodes while downloading β units of data from each. show that the repair bandwidth dβ can be considerably reduced if each node stores slightly more than the minimum required and characterize the tradeoff between the amount of storage per node and the repair bandwidth.", "Distributed storage systems provide reliable access to data through redundancy spread over individually unreliable nodes. Application scenarios include data centers, peer-to-peer storage systems, and storage in wireless networks. Storing data using an erasure code, in fragments spread across nodes, requires less redundancy than simple replication for the same level of reliability. However, since fragments must be periodically replaced as nodes fail, a key question is how to generate encoded fragments in a distributed way while transferring as little data as possible across the network. For an erasure coded system, a common practice to repair from a node failure is for a new node to download subsets of data stored at a number of surviving nodes, reconstruct a lost coded block using the downloaded data, and store it at the new node. We show that this procedure is sub-optimal. We introduce the notion of regenerating codes, which allow a new node to download of the stored data from the surviving nodes. We show that regenerating codes can significantly reduce the repair bandwidth. Further, we show that there is a fundamental tradeoff between storage and repair bandwidth which we theoretically characterize using flow arguments on an appropriately constructed graph. 
By invoking constructive results in network coding, we introduce regenerating codes that can achieve any point in this optimal tradeoff." ] }
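The storage-bandwidth tradeoff characterized in @cite_20 has two well-known extreme points, minimum storage (MSR) and minimum bandwidth (MBR). A small sketch using the standard closed-form expressions follows, for a file of size M stored on n nodes, recoverable from any k, with repair contacting d nodes; the function names are illustrative.

```python
from fractions import Fraction

def msr_point(M, k, d):
    """Minimum-storage regenerating (MSR) point of the tradeoff:
    per-node storage alpha = M/k, repair bandwidth
    gamma = d * beta = M*d / (k * (d - k + 1))."""
    return Fraction(M, k), Fraction(M * d, k * (d - k + 1))

def mbr_point(M, k, d):
    """Minimum-bandwidth regenerating (MBR) point: per-node storage
    equals repair bandwidth, alpha = gamma = 2*M*d / (k * (2*d - k + 1))."""
    g = Fraction(2 * M * d, k * (2 * d - k + 1))
    return g, g

# File of size M = 6, recoverable from any k = 3 nodes, repair from d = 4:
print(msr_point(6, 3, 4))  # least storage per node, more repair traffic
print(mbr_point(6, 3, 4))  # more storage per node, least repair traffic
```

Comparing the two points makes the tradeoff concrete: moving from MSR to MBR increases per-node storage but strictly reduces the repair bandwidth, which is the operating point @cite_3 achieves under exact repair.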