Makespan minimization on identical parallel machines is a classical scheduling problem. We consider the online scenario where a sequence of $n$ jobs has to be scheduled non-preemptively on $m$ machines so as to minimize the maximum completion time of any job. The best competitive ratio that can be achieved by deterministic online algorithms is in the range $[1.88,1.9201]$. Currently no randomized online algorithm with a smaller competitiveness is known, for general $m$. In this paper we explore the power of job migration, i.e.\ an online scheduler is allowed to perform a limited number of job reassignments. Migration is a common technique used in theory and practice to balance load in parallel processing environments. As our main result we settle the performance that can be achieved by deterministic online algorithms. We develop an algorithm that is $\alpha_m$-competitive, for any $m\geq 2$, where $\alpha_m$ is the solution of a certain equation. For $m=2$, $\alpha_2 = 4/3$ and $\lim_{m\rightarrow \infty} \alpha_m = W_{-1}(-1/e^2)/(1+ W_{-1}(-1/e^2)) \approx 1.4659$. Here $W_{-1}$ is the lower branch of the Lambert $W$ function. For $m\geq 11$, the algorithm uses at most $7m$ migration operations. For smaller $m$, $8m$ to $10m$ operations may be performed. We complement this result by a matching lower bound: No online algorithm that uses $o(n)$ job migrations can achieve a competitive ratio smaller than $\alpha_m$. We finally trade performance for migrations. We give a family of algorithms that is $c$-competitive, for any $5/3\leq c \leq 2$. For $c= 5/3$, the strategy uses at most $4m$ job migrations. For $c=1.75$, at most $2.5m$ migrations are used.
On the Value of Job Migration in Online Makespan Minimization
4,800
Balanced allocation of online balls-into-bins has long been an active area of research for efficient load balancing and hashing applications. There exists a large number of results in this domain for different settings, such as parallel allocations~\cite{parallel}, multi-dimensional allocations~\cite{multi}, weighted balls~\cite{weight}, etc. For sequential multi-choice allocation, where $m$ balls are thrown into $n$ bins with each ball choosing $d$ (constant) bins independently and uniformly at random, the maximum load of a bin is $O(\log \log n) + m/n$ with high probability~\cite{heavily_load}. This is currently the best known allocation scheme. However, for $d = \Theta(\log n)$, the gap reduces to $O(1)$~\cite{soda08}. A similar constant gap bound has been established for parallel allocations with $O(\log^* n)$ communication rounds~\cite{lenzen}. In this paper we propose a novel multi-choice allocation algorithm, \emph{Improved D-choice with Estimated Average} ($IDEA$), achieving a constant gap with high probability for the sequential single-dimensional online allocation problem with constant $d$. We achieve a maximum load of $\lceil m/n \rceil$ with high probability for the constant $d$-choice scheme with an \emph{expected} constant number of retries or rounds per ball. We also show that the bound holds even for an arbitrarily large number of balls, $m \gg n$. Further, we generalize this result to (i)~the weighted case, where balls have weights drawn from an arbitrary weight distribution with finite variance, (ii)~the multi-dimensional setting, where balls have $D$ dimensions with $f$ randomly and uniformly chosen filled dimensions for $m=n$, and (iii)~the parallel case, where $n$ balls arrive and are placed in the bins in parallel. We show that the gap in these cases is also a constant w.h.p. (independent of $m$) for constant values of $d$ with an expected constant number of retries per ball.
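To make the setting concrete, here is a minimal simulation of the classic Greedy[$d$] process the abstract builds on (not $IDEA$ itself, whose estimated-average and retry logic is the paper's contribution); the function name and parameters are illustrative.

```python
import random

def d_choice_allocate(m, n, d, rng):
    """Throw m balls into n bins; each ball probes d bins chosen
    independently and uniformly at random and joins the least loaded."""
    loads = [0] * n
    for _ in range(m):
        probes = [rng.randrange(n) for _ in range(d)]
        loads[min(probes, key=lambda b: loads[b])] += 1
    return loads

rng = random.Random(42)
n = m = 10_000
# gap = max load minus average; O(log log n) w.h.p. for d = 2
gap = max(d_choice_allocate(m, n, 2, rng)) - m // n
```

Running the same experiment with $d = 1$ typically shows a noticeably larger gap, which is the phenomenon the multi-choice literature quantifies.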
Perfectly Balanced Allocation With Estimated Average Using Expected Constant Retries
4,801
One of the motivations for property testing of boolean functions is the idea that testing can serve as a preprocessing step before learning. However, in most machine learning applications, it is not possible to request for labels of fictitious examples constructed by the algorithm. Instead, the dominant query paradigm in applied machine learning, called active learning, is one where the algorithm may query for labels, but only on points in a given polynomial-sized (unlabeled) sample, drawn from some underlying distribution D. In this work, we bring this well-studied model in learning to the domain of testing. We show that for a number of important properties, testing can still yield substantial benefits in this setting. This includes testing unions of intervals, testing linear separators, and testing various assumptions used in semi-supervised learning. In addition to these specific results, we also develop a general notion of the testing dimension of a given property with respect to a given distribution. We show this dimension characterizes (up to constant factors) the intrinsic number of label requests needed to test that property. We develop such notions for both the active and passive testing models. We then use these dimensions to prove a number of lower bounds, including for linear separators and the class of dictator functions. Our results show that testing can be a powerful tool in realistic models for learning, and further that active testing exhibits an interesting and rich structure. Our work in addition brings together tools from a range of areas including U-statistics, noise-sensitivity, self-correction, and spectral analysis of random matrices, and develops new tools that may be of independent interest.
Active Property Testing
4,802
Cheeger's fundamental inequality states that any edge-weighted graph has a vertex subset $S$ such that its expansion (a.k.a. conductance) is bounded as follows: \[ \phi(S) \defeq \frac{w(S,\bar{S})}{\min \set{w(S), w(\bar{S})}} \leq 2\sqrt{\lambda_2} \] where $w$ is the total edge weight of a subset or a cut and $\lambda_2$ is the second smallest eigenvalue of the normalized Laplacian of the graph. Here we prove the following natural generalization: for any integer $k \in [n]$, there exist $ck$ disjoint subsets $S_1, ..., S_{ck}$, such that \[ \max_i \phi(S_i) \leq C \sqrt{\lambda_{k} \log k} \] where $\lambda_i$ is the $i^{th}$ smallest eigenvalue of the normalized Laplacian and $c<1,C>0$ are suitable absolute constants. Our proof is via a polynomial-time algorithm to find such subsets, consisting of a spectral projection and a randomized rounding. As a consequence, we get the same upper bound for the small set expansion problem, namely for any $k$, there is a subset $S$ whose weight is at most a $\bigO(1/k)$ fraction of the total weight and $\phi(S) \le C \sqrt{\lambda_k \log k}$. Both results are the best possible up to constant factors. The underlying algorithmic problem, namely finding $k$ subsets such that the maximum expansion is minimized, besides extending sparse cuts to more than one subset, appears to be a natural clustering problem in its own right.
Many Sparse Cuts via Higher Eigenvalues
4,803
Advances in DNA sequencing technology will soon result in databases of thousands of genomes. Within a species, individuals' genomes are almost exact copies of each other; e.g., any two human genomes are 99.9% the same. Relative Lempel-Ziv (RLZ) compression takes advantage of this property: it stores the first genome uncompressed or as an FM-index, then compresses the other genomes with a variant of LZ77 that copies phrases only from the first genome. RLZ achieves good compression and supports fast random access; in this paper we show how to support fast search as well, thus obtaining an efficient compressed self-index.
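A toy version of the RLZ idea, offered as an illustration rather than the paper's index structure: each new genome is parsed greedily, left to right, into phrases that occur in the reference, falling back to single-character literals when the reference lacks a character.

```python
def rlz_parse(reference, target):
    """Greedy left-to-right parse of `target` into phrases copied
    from `reference`, plus literals for uncovered characters."""
    phrases = []
    i = 0
    while i < len(target):
        # extend the current phrase while it still occurs in the reference
        j = i + 1
        while j <= len(target) and target[i:j] in reference:
            j += 1
        if j - 1 > i:  # longest match target[i:j-1]
            phrases.append(('copy', reference.find(target[i:j - 1]), j - 1 - i))
            i = j - 1
        else:          # character absent from the reference
            phrases.append(('literal', target[i]))
            i += 1
    return phrases

def rlz_decode(reference, phrases):
    out = []
    for p in phrases:
        if p[0] == 'copy':
            _, start, length = p
            out.append(reference[start:start + length])
        else:
            out.append(p[1])
    return ''.join(out)
```

Because genomes in one species differ in few positions, the parse consists of a handful of long copies, which is what makes the representation compact.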
A Compressed Self-Index for Genomic Databases
4,804
We present several new results about smoothed analysis of multiobjective optimization problems. Motivated by the discrepancy between worst-case analysis and practical experience, this line of research has gained a lot of attention in the last decade. We consider problems in which d linear and one arbitrary objective function are to be optimized over a subset S of {0,1}^n of feasible solutions. We improve the previously best known bound for the smoothed number of Pareto-optimal solutions to O(n^{2d} phi^d), where phi denotes the perturbation parameter. Additionally, we show that for any constant c the c-th moment of the smoothed number of Pareto-optimal solutions is bounded by O((n^{2d} phi^d)^c). This improves the previously best known bounds significantly. Furthermore, we address the criticism that the perturbations in smoothed analysis destroy the zero-structure of problems by showing that the smoothed number of Pareto-optimal solutions remains polynomially bounded even for zero-preserving perturbations. This broadens the class of problems captured by smoothed analysis and it has consequences for non-linear objective functions. One corollary of our result is that the smoothed number of Pareto-optimal solutions is polynomially bounded for polynomial objective functions.
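For intuition about the object being counted, here is a minimal brute-force Pareto-set computation under minimization; the paper's results bound how large this set is expected to be for perturbed inputs.

```python
def pareto_optimal(solutions, objectives):
    """Brute-force Pareto set under minimization: a solution survives
    if no other solution is at least as good in every objective and
    strictly better in at least one."""
    def value(s):
        return [f(s) for f in objectives]
    def dominates(a, b):
        va, vb = value(a), value(b)
        return all(x <= y for x, y in zip(va, vb)) and va != vb
    return [s for s in solutions if not any(dominates(t, s) for t in solutions)]
```

In the smoothed setting, `solutions` is a subset of {0,1}^n and the linear objectives carry small random perturbations; the theorem says the returned set has polynomially bounded expected size.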
Improved Smoothed Analysis of Multiobjective Optimization
4,805
We present a general method for de-amortizing essentially any Binary Search Tree (BST) algorithm. In particular, by transforming Splay Trees, our method produces a BST that has the same asymptotic cost as Splay Trees on any access sequence while performing each search in O(log n) worst case time. By transforming Multi-Splay Trees, we obtain a BST that is O(log log n) competitive, satisfies the scanning theorem, the static optimality theorem, the static finger theorem, the working set theorem, and performs each search in O(log n) worst case time. Moreover, we prove that if there is a dynamically optimal BST algorithm, then there is a dynamically optimal BST algorithm that answers every search in O(log n) worst case time.
De-amortizing Binary Search Trees
4,806
A systematic technique to bound factor-revealing linear programs is presented. We show how to derive a family of upper bound factor-revealing programs (UPFRPs), and show that each such program can be solved by a computer to bound the approximation factor of an associated algorithm. Obtaining a UPFRP is straightforward, and it can be used as an alternative to analytical proofs, which are usually very long and tedious. We apply this technique to the Metric Facility Location Problem (MFLP) and to a generalization where the distance function is a squared metric. We call this generalization the Squared Metric Facility Location Problem (SMFLP) and prove that it admits no approximation factor better than 2.04, assuming P $\neq$ NP. We then analyze the best known algorithms for the MFLP based on primal-dual and LP-rounding techniques when they are applied to the SMFLP. We prove very tight bounds for these algorithms, and show that the LP-rounding algorithm achieves a ratio of 2.04, and therefore has the best possible factor for the SMFLP. We use UPFRPs in the dual-fitting analysis of the primal-dual algorithms for both the SMFLP and the MFLP, improving on some of the previous analyses for the MFLP.
A Systematic Approach to Bound Factor-Revealing LPs and its Application to the Metric and Squared Metric Facility Location Problems
4,807
In this work we consider the {\em image matching} problem for two grayscale $n \times n$ images, $M_1$ and $M_2$ (where pixel values range from 0 to 1). Our goal is to find an affine transformation $T$ that maps pixels from $M_1$ to pixels in $M_2$ so that the sum over pixels $p$ of the differences between $M_1(p)$ and $M_2(T(p))$ is minimized. Our focus here is on sublinear algorithms that give an approximate result for this problem; that is, we wish to perform this task while querying as few pixels from both images as possible, and to output a transformation that comes close to minimizing the difference. We give an algorithm for the image matching problem that returns a transformation $T$ which minimizes the sum of differences (normalized by $n^2$) up to an additive error of $\epsilon$ and performs $\tilde{O}(n/\epsilon^2)$ queries. We give a corresponding lower bound of $\Omega(n)$ queries, showing that this is the best possible result in the general case (with respect to $n$ and up to low order terms). In addition, we give a significantly better algorithm for a natural family of images, namely, smooth images. We call an image smooth when the total difference between neighboring pixels is $O(n)$. For such images we provide an approximation of the distance between the images to within an additive error of $\epsilon$ using a number of queries depending polynomially on $1/\epsilon$ and not on $n$. To do this we first consider the image matching problem for 2- and 3-dimensional {\em binary} images, and then reduce the grayscale image matching problem to the 3-dimensional binary case.
Tight Approximation of Image Matching
4,808
The existence of a polynomial kernel for Odd Cycle Transversal was a notorious open problem in parameterized complexity. Recently, this was settled by the present authors (Kratsch and Wahlstr\"om, SODA 2012), with a randomized polynomial kernel for the problem, using matroid theory to encode flow questions over a set of terminals in size polynomial in the number of terminals. In the current work we further establish the usefulness of matroid theory to kernelization by showing applications of a result on representative sets due to Lov\'asz (Combinatorial Surveys 1977) and Marx (TCS 2009). We show how representative sets can be used to give a polynomial kernel for the elusive Almost 2-SAT problem. We further apply the representative sets tool to the problem of finding irrelevant vertices in graph cut problems, i.e., vertices which can be made undeletable without affecting the status of the problem. This gives the first significant progress towards a polynomial kernel for the Multiway Cut problem; in particular, we get a kernel of O(k^{s+1}) vertices for Multiway Cut instances with at most s terminals. Both these kernelization results have significant spin-off effects, producing the first polynomial kernels for a range of related problems. More generally, the irrelevant vertex results have implications for covering min-cuts in graphs. For a directed graph G=(V,E) and sets S, T \subseteq V, let r be the size of a minimum (S,T)-vertex cut (which may intersect S and T). We can find a set Z \subseteq V of size O(|S|*|T|*r) which contains a minimum (A,B)-vertex cut for every A \subseteq S, B \subseteq T. Similarly, for an undirected graph G=(V,E), a set of terminals X \subseteq V, and a constant s, we can find a set Z\subseteq V of size O(|X|^{s+1}) which contains a minimum multiway cut for any partition of X into at most s pairwise disjoint subsets.
Representative sets and irrelevant vertices: New tools for kernelization
4,809
Sequence representations supporting queries $access$, $select$ and $rank$ are at the core of many data structures. There is a considerable gap between the various upper bounds and the few lower bounds known for such representations, and how they relate to the space used. In this article we prove a strong lower bound for $rank$, which holds for rather permissive assumptions on the space used, and give matching upper bounds that require only a compressed representation of the sequence. Within this compressed space, operations $access$ and $select$ can be solved in constant or almost-constant time, which is optimal for large alphabets. Our new upper bounds dominate all of the previous work in the time/space map.
Optimal Lower and Upper Bounds for Representing Sequences
4,810
In this paper, a fully compressed pattern matching problem is studied. The compression is represented by straight-line programs (SLPs), i.e., context-free grammars generating exactly one string; the term fully means that both the pattern and the text are given in compressed form. The problem is approached using a recently developed technique of local recompression: the SLPs are refactored so that substrings of the pattern and text are encoded in both SLPs in the same way. To this end, the SLPs are locally decompressed and then recompressed in a uniform way. This technique yields an O((n+m)log M) algorithm for compressed pattern matching, assuming that M fits in O(1) machine words, where n (m) is the size of the compressed representation of the text (pattern, respectively), while M is the size of the decompressed pattern. If only m+n fits in O(1) machine words, the running time increases to O((n+m)log M log(n+m)). The previous best algorithm, due to Lifshits, had O(n^2 m) running time.
Faster fully compressed pattern matching by recompression
4,811
Sieving is essential in many number-theoretic algorithms. Sieving with large primes violates locality of memory access, thus degrading performance. Our suggestion for tackling this problem is to use cyclic data structures in combination with in-place bucket sort. We present the results of our implementation of the sieve of Eratosthenes using these ideas, which show that this approach is more robust and less affected by slow memory.
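The locality issue and one standard remedy can be illustrated with a segmented sieve, a simpler relative of the cyclic-buffer/bucket approach described above (the exact data structures are the paper's): base primes are applied one cache-sized segment at a time instead of striding across the whole array.

```python
from math import isqrt

def segmented_sieve(limit, segment_size=32768):
    """All primes <= limit; the large range is sieved one cache-sized
    segment at a time so memory access stays local."""
    root = isqrt(limit)
    # plain sieve for the base primes up to sqrt(limit)
    small = bytearray([1]) * (root + 1)
    small[0:2] = b'\x00\x00'
    for p in range(2, isqrt(root) + 1):
        if small[p]:
            small[p * p::p] = bytearray(len(small[p * p::p]))
    base = [p for p in range(2, root + 1) if small[p]]

    primes = list(base)
    low = root + 1
    while low <= limit:
        high = min(low + segment_size - 1, limit)
        seg = bytearray([1]) * (high - low + 1)
        for p in base:
            start = max(p * p, (low + p - 1) // p * p)  # first multiple of p in segment
            seg[start - low::p] = bytearray(len(seg[start - low::p]))
        primes.extend(low + i for i in range(len(seg)) if seg[i])
        low = high + 1
    return primes
```

With a segment that fits in L1/L2 cache, each base prime's strides stay within one hot buffer, which is the effect the cyclic structures above are designed to preserve even for primes larger than the segment.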
Cache optimized linear sieve
4,812
We consider the problem of scheduling on a single processor a given set of n jobs. Each job j has a workload w_j and a release time r_j. The processor can vary its speed and hibernate to reduce energy consumption. In a schedule minimizing overall consumed energy, it might be that some jobs complete arbitrarily far from their release time. So in order to guarantee some quality of service, we would like to impose a deadline d_j=r_j+F for every job j, where F is a guarantee on the *flow time*. We provide an O(n^3) algorithm for the more general case of *agreeable deadlines*, where jobs have release times and deadlines and can be ordered such that for every i<j, both r_i<=r_j and d_i<=d_j.
Speed scaling with power down scheduling for agreeable deadlines
4,813
We present a fully dynamic algorithm for the recognition of proper circular-arc (PCA) graphs. The allowed operations on the graph involve the insertion and removal of vertices (together with their incident edges) or edges. Edge operations cost O(log n) time, where n is the number of vertices of the graph, while vertex operations cost O(log n + d) time, where d is the degree of the modified vertex. We also show incremental and decremental algorithms that work in O(1) time per inserted or removed edge. As part of our algorithm, fully dynamic connectivity and co-connectivity algorithms that work in O(log n) time per operation are obtained. Also, an O(\Delta) time algorithm for determining whether a PCA representation corresponds to a co-bipartite graph is provided, where \Delta\ is the maximum among the degrees of the vertices. When the graph is co-bipartite, a co-bipartition of each of its co-components is obtained within the same amount of time.
Fully dynamic recognition of proper circular-arc graphs
4,814
The debts' clearing problem is about clearing all the debts in a group of $n$ entities (e.g., persons or companies) using a minimal number of money transaction operations. In our previous works we studied the problem, gave a dynamic programming solution for it, and proved that it is NP-hard. In this paper we adapt the problem to dynamic graphs and give a data structure to solve it. Based on this data structure we develop a new algorithm that improves our previous one for the static version of the problem.
The debts' clearing problem: a new approach
4,815
We describe an efficient FPGA implementation for the exponentiation of large matrices. The research is related to an algorithm for constructing uniformly distributed linear recurring sequences. The design exploits the special properties of both the FPGA and the matrices involved to achieve a very significant speedup compared to traditional architectures.
Modular exponentiation of matrices on FPGAs
4,816
In this paper, after presenting the results of a generalization of Pascal's triangle (using powers of base numbers), we examine some properties of the 112-based triangle, primarily with regard to prime numbers. Additionally, an effective implementation of the ECPP method is presented, which enables the Magma computer algebra system to prove the primality of numbers with more than 1000 decimal digits.
Large primes in generalized Pascal triangles
4,817
We consider the (precedence constrained) Minimum Feedback Arc Set problem with triangle inequalities on the weights, which finds important applications in problems of ranking with inconsistent information. We present a surprising structural insight showing that the problem is a special case of the minimum vertex cover in hypergraphs with edges of size at most 3. This result leads to combinatorial approximation algorithms for the problem and opens the road to studying the problem as a vertex cover problem.
The Feedback Arc Set Problem with Triangle Inequality is a Vertex Cover Problem
4,818
Supporting top-k document retrieval queries on general text databases, that is, finding the k documents in which a given pattern occurs most frequently, has become a topic of interest with practical applications. While the problem has been solved in optimal time and linear space, the actual space usage is a serious concern. In this paper we study various reduced-space structures that support top-k retrieval and propose new alternatives. Our experimental results show that our novel algorithms and data structures dominate almost all of the previous space/time tradeoffs.
Practical Top-K Document Retrieval in Reduced Space
4,819
We study the problem of constructing universal Steiner trees for undirected graphs. Given a graph $G$ and a root node $r$, we seek a single spanning tree $T$ of minimum {\em stretch}, where the stretch of $T$ is defined to be the maximum ratio, over all terminal sets $X$, of the cost of the minimal sub-tree $T_X$ of $T$ that connects $X$ to $r$ to the cost of an optimal Steiner tree connecting $X$ to $r$ in $G$. Universal Steiner trees (USTs) are important for data aggregation problems where computing the Steiner tree from scratch for every input instance of terminals is costly, as for example in low energy sensor network applications. We provide a polynomial time UST construction for general graphs with $2^{O(\sqrt{\log n})}$-stretch. We also give a polynomial time $\polylog(n)$-stretch construction for minor-free graphs. One basic building block of our algorithms is a hierarchy of graph partitions, each of which guarantees small strong diameter for each cluster and bounded neighbourhood intersections for each node. We show close connections between the problems of constructing USTs and building such graph partitions. Our construction of partition hierarchies for general graphs is based on an iterative cluster merging procedure, while the one for minor-free graphs is based on a separator theorem for such graphs and the solution to a cluster aggregation problem that may be of independent interest even for general graphs. To our knowledge, this is the first subpolynomial-stretch ($o(n^\epsilon)$ for any $\epsilon > 0$) UST construction for general graphs, and the first polylogarithmic-stretch UST construction for minor-free graphs.
On Strong Graph Partitions and Universal Steiner Trees
4,820
Tries are popular data structures for storing a set of strings, where common prefixes are represented by common root-to-node paths. Over fifty years of usage have produced many variants and implementations to overcome some of their limitations. We explore new succinct representations of path-decomposed tries and experimentally evaluate the corresponding reduction in space usage and memory latency, comparing with the state of the art. We study two cases of applications: (1) a compressed dictionary for (compressed) strings, and (2) a monotone minimal perfect hash for strings that preserves their lexicographic order. For (1), we obtain data structures that outperform other state-of-the-art compressed dictionaries in space efficiency, while obtaining predictable query times that are competitive with data structures preferred by the practitioners. In (2), our tries perform several times faster than other trie-based monotone perfect hash functions, while occupying nearly the same space.
Fast Compressed Tries through Path Decompositions
4,821
Given a stream of items each associated with a numerical value, its edit distance to monotonicity is the minimum number of items to remove so that the remaining items are non-decreasing with respect to the numerical value. The space complexity of estimating the edit distance to monotonicity of a data stream has become well understood over the past few years. Motivated by applications in network quality monitoring, we extend the study to estimating the edit distance to monotonicity of a sliding window covering the $w$ most recent items in the stream, for any $w \ge 1$. We give a deterministic algorithm which can return an estimate within a factor of $(4+\epsilon)$ using $O(\frac{1}{\epsilon^2} \log^2(\epsilon w))$ space. We also extend the study in two directions. First, we consider a stream where each item is associated with a value from a partially ordered set. We give a randomized $(4+\epsilon)$-approximate algorithm using $O(\frac{1}{\epsilon^2} \log(\epsilon^2 w) \log w)$ space. Second, we consider an out-of-order stream where each item is associated with a creation time and a numerical value, and items may be out of order with respect to their creation times. The goal is to estimate the edit distance to monotonicity with respect to the numerical value of items arranged in the order of creation times. We show that any randomized constant-approximate algorithm requires linear space.
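For reference, the quantity itself is easy to compute exactly offline: it is $n$ minus the length of a longest non-decreasing subsequence. The paper's difficulty is approximating it in small space over a sliding window; the patience-sorting sketch below uses linear memory and is only meant to pin down the definition.

```python
from bisect import bisect_right

def edit_distance_to_monotonicity(stream):
    """Minimum number of deletions leaving a non-decreasing sequence,
    computed offline as n - LNDS via patience sorting."""
    tails = []  # tails[k] = smallest possible tail of a non-decreasing subsequence of length k+1
    for x in stream:
        i = bisect_right(tails, x)
        if i == len(tails):
            tails.append(x)
        else:
            tails[i] = x
    return len(stream) - len(tails)
```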
Edit Distance to Monotonicity in Sliding Windows
4,822
We describe a variant of the Bellman-Ford algorithm for single-source shortest paths in graphs with negative edges but no negative cycles that randomly permutes the vertices and uses this randomized order to process the vertices within each pass of the algorithm. The modification reduces the worst-case expected number of relaxation steps of the algorithm, compared to the previously-best variant by Yen (1970), by a factor of 2/3 with high probability. We also use our high probability bound to add negative cycle detection to the randomized algorithm.
Randomized Speedup of the Bellman-Ford Algorithm
4,823
The goal of the Directed Steiner Tree problem is to find a minimum cost tree in a directed graph G=(V,E) that connects all terminals X to a given root r. It is well known that, modulo a logarithmic factor, it suffices to consider acyclic graphs where the nodes are arranged in L <= log |X| levels. Unfortunately the natural LP formulation has an |X|^(1/2) integrality gap already for 5 levels. We show that for every L, the O(L)-round Lasserre strengthening of this LP has integrality gap O(L log |X|). This provides a polynomial time |X|^{epsilon}-approximation and an O(log^3 |X|) approximation in O(n^{log |X|}) time, matching the best known approximation guarantee obtained by a greedy algorithm of Charikar et al.
Directed Steiner Tree and the Lasserre Hierarchy
4,824
Skylines emerged as a useful notion in database queries for selecting representative groups in multivariate data samples for further decision making, multi-objective optimization or data processing, and the $k$-dominant skylines were naturally introduced to resolve the abundance of skylines when the dimensionality grows or when the coordinates are negatively correlated. We prove in this paper that the expected number of $k$-dominant skylines is asymptotically zero for large samples when $1\le k\le d-1$ under two reasonable (continuous) probability assumptions of the input points, $d$ being the (finite) dimensionality, in contrast to the asymptotic unboundedness when $k=d$. In addition to such an asymptotic zero-infinity property, we also establish a sharp threshold phenomenon for the expected ($d-1$)-dominant skylines when the dimensionality is allowed to grow with $n$. Several related issues such as the dominant cycle structures and numerical aspects, are also briefly studied.
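To fix the definition being counted, here is a brute-force sketch under a maximization convention (an assumption; conventions vary): $p$ $k$-dominates $q$ if $p$ is at least as good in at least $k$ coordinates and strictly better in at least one. The small example in the test also exhibits the dominance cycles that can empty the $k$-dominant skyline for $k < d$.

```python
def k_dominant_skyline(points, k):
    """Points of `points` that are not k-dominated by any other point.
    Maximization convention (illustrative, not necessarily the paper's):
    p k-dominates q if p >= q in at least k coordinates and p > q in
    at least one coordinate."""
    def k_dominates(p, q):
        return (sum(a >= b for a, b in zip(p, q)) >= k
                and any(a > b for a, b in zip(p, q)))
    return [p for p in points
            if not any(k_dominates(q, p) for q in points if q != p)]
```

For $k = d$ this is the ordinary skyline; for smaller $k$ dominance is no longer transitive, which is why cycle structures matter.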
Threshold phenomena in k-dominant skylines of random samples
4,825
We consider the problem of computing all-pairs shortest paths in a directed graph with real weights assigned to vertices. For an $n\times n$ 0-1 matrix $C$, let $K_{C}$ be the complete weighted graph on the rows of $C$, where the weight of an edge between two rows is equal to their Hamming distance. Let $MWT(C)$ be the weight of a minimum weight spanning tree of $K_{C}$. We show that the all-pairs shortest path problem for a directed graph $G$ on $n$ vertices with nonnegative real weights and adjacency matrix $A_G$ can be solved by a combinatorial randomized algorithm in time $$\widetilde{O}(n^{2}\sqrt {n + \min\{MWT(A_G), MWT(A_G^t)\}}).$$ As a corollary, we conclude that the transitive closure of a directed graph $G$ can be computed by a combinatorial randomized algorithm in the same time bound. We also conclude that the all-pairs shortest path problem for uniform disk graphs, with nonnegative real vertex weights, induced by point sets of bounded density within a unit square can be solved in time $\widetilde{O}(n^{2.75})$.
A Combinatorial Algorithm for All-Pairs Shortest Paths in Directed Vertex-Weighted Graphs with Applications to Disc Graphs
4,826
In this note, we show that the integrality gap of the $k$-Directed-Component-Relaxation ($k$-DCR) LP for the Steiner tree problem, introduced by Byrka, Grandoni, Rothvo{\ss} and Sanit\`a (STOC 2010), is at most $\ln(4)<1.39$. The proof is constructive: we can efficiently find a Steiner tree whose cost is at most $\ln(4)$ times the cost of the optimal fractional $k$-restricted Steiner tree given by the $k$-DCR LP.
On the Integrality Gap of the Directed-Component Relaxation for Steiner Tree
4,827
We give a complete characterization of the two-state anti-ferromagnetic spin systems which are of strong spatial mixing on general graphs. We show that a two-state anti-ferromagnetic spin system is of strong spatial mixing on all graphs of maximum degree at most $\Delta$ if and only if the system has a unique Gibbs measure on infinite regular trees of degree up to $\Delta$, where $\Delta$ can be either bounded or unbounded. As a consequence, there exists an FPTAS for the partition function of a two-state anti-ferromagnetic spin system on graphs of maximum degree at most $\Delta$ when the uniqueness condition is satisfied on infinite regular trees of degree up to $\Delta$. In particular, an FPTAS exists for arbitrary graphs if the uniqueness is satisfied on all infinite regular trees. This covers as special cases all previous algorithmic results for two-state anti-ferromagnetic systems on general-structure graphs. Combining with the FPRAS for two-state ferromagnetic spin systems of Jerrum-Sinclair and Goldberg-Jerrum-Paterson, and the hardness results of Sly-Sun and independently of Galanis-Stefankovic-Vigoda, this gives a complete classification, except at the phase transition boundary, of the approximability of all two-state spin systems, on either degree-bounded families of graphs or family of all graphs.
Correlation Decay up to Uniqueness in Spin Systems
4,828
In the semi-streaming model, an algorithm receives a stream of edges of a graph in arbitrary order and uses a memory of size $O(n \mbox{ polylog } n)$, where $n$ is the number of vertices of a graph. In this work, we present semi-streaming algorithms that perform one or two passes over the input stream for maximum matching with no restrictions on the input graph, and for the important special case of bipartite graphs that we refer to as maximum bipartite matching (MBM). The Greedy matching algorithm performs one pass over the input and outputs a $1/2$ approximation. Whether there is a better one-pass algorithm has been an open question since the appearance of the first paper on streaming algorithms for matching problems in 2005 [Feigenbaum et al., SODA 2005]. We make the following progress on this problem: In the one-pass setting, we show that there is a deterministic semi-streaming algorithm for MBM with expected approximation factor $1/2+0.005$, assuming that edges arrive one by one in (uniform) random order. We extend this algorithm to general graphs, and we obtain a $1/2+0.003$ approximation. In the two-pass setting, we do not require the random arrival order assumption (the edge stream is in arbitrary order). We present a simple randomized two-pass semi-streaming algorithm for MBM with expected approximation factor $1/2 + 0.019$. Furthermore, we discuss a more involved deterministic two-pass semi-streaming algorithm for MBM with approximation factor $1/2 + 0.019$ and a generalization of this algorithm to general graphs with approximation factor $1/2 + 0.0071$.
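The baseline these results improve on is one-pass Greedy, which keeps an edge exactly when both endpoints are still free; the resulting matching is maximal and hence at least half the size of a maximum matching. A sketch (the edge order is the adversarial stream order):

```python
def greedy_matching(edges):
    """One-pass streaming Greedy matching: keep an edge iff both of its
    endpoints are currently unmatched. The output is a maximal matching,
    hence a 1/2-approximation of a maximum matching."""
    matched = set()
    matching = []
    for u, v in edges:
        if u not in matched and v not in matched:
            matching.append((u, v))
            matched.update((u, v))
    return matching
```

On the path 1-2-3-4, streaming the middle edge first yields a matching of size 1 against an optimum of 2, showing the factor 1/2 is tight for Greedy, which is exactly the barrier the algorithms above push past.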
Maximum Matching in Semi-Streaming with Few Passes
4,829
Many problems in Computer Science can be abstracted to the following question: given a set of objects and a set of rules, which new objects can be produced? In this paper, we consider a succinct version of the question: given a set of binary strings and several operations such as conjunction and disjunction, which new binary strings can be generated? Although it is a fundamental problem, to the best of our knowledge it has not been studied before. In this paper, an O(m^2 n) algorithm is presented to determine whether a string s is representable by a set W, where n is the number of strings in W and each string has the same length m. However, finding the minimum subset of a set that represents a given string is shown to be NP-hard. Also, finding the smallest subset of a set that represents every string in the original set is NP-hard. We establish inapproximability results and approximation algorithms for both problems. In addition, we prove that counting the number of representable strings is #P-complete. We then explore how the problems change when the negation operator is available. For example, with negation, the number of representable strings is some power of 2. This difference may help us understand the problem more deeply.
Computing on Binary Strings
4,830
For a given set of intervals on the real line, we consider the problem of ordering the intervals with the goal of minimizing an objective function that depends on the exposed interval pieces (that is, the pieces that are not covered by earlier intervals in the ordering). This problem is motivated by an application in molecular biology that concerns the determination of the structure of the backbone of a protein. We present polynomial-time algorithms for several natural special cases of the problem that cover the situation where the interval boundaries are agreeably ordered and the situation where the interval set is laminar. The bottleneck variant of the problem is also shown to be solvable in polynomial time. Finally we prove that the general problem is NP-hard, and that the existence of a constant-factor approximation algorithm is unlikely.
The interval ordering problem
4,831
We consider the problem of detecting a cycle in a directed graph that grows by arc insertions, and the related problems of maintaining a topological order and the strong components of such a graph. For these problems, we give two algorithms, one suited to sparse graphs, and the other to dense graphs. The former takes the minimum of O(m^{3/2}) and O(mn^{2/3}) time to insert m arcs into an n-vertex graph; the latter takes O(n^2 log(n)) time. Our sparse algorithm is considerably simpler than a previous O(m^{3/2})-time algorithm; it is also faster on graphs of sufficient density. The time bound of our dense algorithm beats the previously best time bound of O(n^{5/2}) for dense graphs. Our algorithms rely for their efficiency on topologically ordered vertex numberings; bounds on the sizes of the numbers yield bounds on the running times.
A New Approach to Incremental Cycle Detection and Related Problems
4,832
The {\em maximum cardinality} and {\em maximum weight matching} problems can be solved in time $\tilde{O}(m\sqrt{n})$, a bound that has resisted improvement despite decades of research. (Here $m$ and $n$ are the number of edges and vertices.) In this article we demonstrate that this "$m\sqrt{n}$ barrier" is extremely fragile, in the following sense. For any $\epsilon>0$, we give an algorithm that computes a $(1-\epsilon)$-approximate maximum weight matching in $O(m\epsilon^{-1}\log\epsilon^{-1})$ time, that is, optimal {\em linear time} for any fixed $\epsilon$. Our algorithm is dramatically simpler than the best exact maximum weight matching algorithms on general graphs and should be appealing in all applications that can tolerate a negligible relative error. Our second contribution is a new {\em exact} maximum weight matching algorithm for integer-weighted bipartite graphs that runs in time $O(m\sqrt{n}\log N)$. This improves on the $O(Nm\sqrt{n})$-time and $O(m\sqrt{n}\log(nN))$-time algorithms known since the mid 1980s, for $1\ll \log N \ll \log n$. Here $N$ is the maximum integer edge weight.
Scaling algorithms for approximate and exact maximum weight matching
4,833
We consider the classical problem of representing a collection of priority queues under the operations \Findmin{}, \Insert{}, \Decrease{}, \Meld{}, \Delete{}, and \Deletemin{}. In the comparison-based model, if the first four operations are to be supported in constant time, the last two operations must take at least logarithmic time. Brodal showed that his worst-case efficient priority queues achieve these worst-case bounds. Unfortunately, this data structure is involved and the time bounds hide large constants. We describe a new variant of the worst-case efficient priority queues that relies on extended regular counters and provides the same asymptotic time and space bounds as the original. Due to the conceptual separation of the operations on regular counters and all other operations, our data structure is simpler and easier to describe and understand. Also, the constants in the time and space bounds are smaller. In addition, we give an implementation of our structure on a pointer machine. For our pointer-machine implementation, \Decrease{} and \Meld{} are asymptotically slower and require $O(\lg\lg{n})$ worst-case time, where $n$ denotes the number of elements stored in the resulting priority queue.
Worst-Case Optimal Priority Queues via Extended Regular Counters
4,834
We consider online resource allocation problems where given a set of requests our goal is to select a subset that maximizes a value minus cost type of objective function. Requests are presented online in random order, and each request possesses an adversarial value and an adversarial size. The online algorithm must make an irrevocable accept/reject decision as soon as it sees each request. The "profit" of a set of accepted requests is its total value minus a convex cost function of its total size. This problem falls within the framework of secretary problems. Unlike previous work in that area, one of the main challenges we face is that the objective function can be positive or negative and we must guard against accepting requests that look good early on but cause the solution to have an arbitrarily large cost as more requests are accepted. This requires designing new techniques. We study this problem under various feasibility constraints and present online algorithms with competitive ratios only a constant factor worse than those known in the absence of costs for the same feasibility constraints. We also consider a multi-dimensional version of the problem that generalizes multi-dimensional knapsack within a secretary framework. In the absence of any feasibility constraints, we present an O(l) competitive algorithm where l is the number of dimensions; this matches within constant factors the best known ratio for multi-dimensional knapsack secretary.
Secretary Problems with Convex Costs
4,835
We study the complexity of some algorithmic problems on directed hypergraphs and their strongly connected components (SCCs). The main contribution is an almost linear time algorithm computing the terminal strongly connected components (i.e. SCCs which do not reach any components but themselves). "Almost linear" here means that the complexity of the algorithm is linear in the size of the hypergraph up to a factor alpha(n), where alpha is the inverse of the Ackermann function and n is the number of vertices. Our motivation to study this problem arises from a recent application of directed hypergraphs to computational tropical geometry. We also discuss the problem of computing all SCCs. We establish a superlinear lower bound on the size of the transitive reduction of the reachability relation in directed hypergraphs, showing that it is combinatorially more complex than in directed graphs. Besides, we prove a linear time reduction from the well-studied problem of finding all minimal sets among a given family to the problem of computing the SCCs. Only subquadratic time algorithms are known for the former problem. These results strongly suggest that the problem of computing the SCCs is harder in directed hypergraphs than in directed graphs.
On the complexity of strongly connected components in directed hypergraphs
4,836
We consider a natural generalization of the Partial Vertex Cover problem. Here an instance consists of a graph G = (V,E), a positive cost function c: V -> Z^{+}, a partition $P_1,..., P_r$ of the edge set $E$, and a parameter $k_i$ for each part $P_i$. The goal is to find a minimum cost set of vertices which covers at least $k_i$ edges from each part $P_i$. We call this the Partition Vertex Cover problem. In this paper, we give matching upper and lower bounds on the approximability of this problem. Our algorithm is based on a novel LP relaxation for this problem. This LP relaxation is obtained by adding knapsack cover inequalities to a natural LP relaxation of the problem. We show that this LP has an integrality gap of $O(\log r)$, where $r$ is the number of parts in the partition of the edge set. We also extend our result to more general settings.
Approximation Algorithms for Edge Partitioned Vertex Cover Problems
4,837
Clustering a graph means identifying internally dense subgraphs which are only sparsely interconnected. Formalizations of this notion lead to measures that quantify the quality of a clustering and to algorithms that actually find clusterings. Since, most generally, corresponding optimization problems are hard, heuristic clustering algorithms are used in practice, or other approaches which are not based on an objective function. In this work we conduct a comprehensive experimental evaluation of the qualitative behavior of greedy bottom-up heuristics driven by cut-based objectives and constrained by intracluster density, using both real-world data and artificial instances. Our study documents that a greedy strategy based on local movement is superior to one based on merging. We further reveal that the former approach generally outperforms alternative setups and reference algorithms from the literature in terms of its own objective, while a modularity-based algorithm competes surprisingly well. Finally, we exhibit which combinations of cut-based inter- and intracluster measures are suitable for identifying a hidden reference clustering in synthetic random graphs.
Experiments on Density-Constrained Graph Clustering
4,838
We consider connectivity problems with orientation constraints. Given a directed graph $D$ and a collection of ordered node pairs $P$, let $P[D]=\{(u,v) \in P: D \text{ contains a } uv\text{-path}\}$. In the {\sf Steiner Forest Orientation} problem we are given an undirected graph $G=(V,E)$ with edge-costs and a set $P \subseteq V \times V$ of ordered node pairs. The goal is to find a minimum-cost subgraph $H$ of $G$ and an orientation $D$ of $H$ such that $P[D]=P$. We give a 4-approximation algorithm for this problem. In the {\sf Maximum Pairs Orientation} problem we are given a graph $G$ and a multi-collection of ordered node pairs $P$ on $V$. The goal is to find an orientation $D$ of $G$ such that $|P[D]|$ is maximum. Generalizing the result of Arkin and Hassin [DAM'02] for $|P|=2$, we show that for a mixed graph $G$ (that may have both directed and undirected edges), one can decide in $n^{O(|P|)}$ time whether $G$ has an orientation $D$ with $P[D]=P$ (for undirected graphs this problem admits a polynomial-time algorithm for any $P$, but it is NP-complete on mixed graphs). For undirected graphs, we show that one can decide whether $G$ admits an orientation $D$ with $|P[D]| \geq k$ in $O(n+m)+2^{O(k\cdot \log \log k)}$ time; hence this decision problem is fixed-parameter tractable, which answers an open question from Dorn et al. [AMB'11]. We also show that {\sf Maximum Pairs Orientation} admits ratio $O(\log |P|/\log\log |P|)$, which is better than the ratio $O(\log n/\log\log n)$ of Gamzu et al. [WABI'10] when $|P|<n$. Finally, we show that the following node-connectivity problem can be solved in polynomial time: given a graph $G=(V,E)$ with edge-costs, $s,t \in V$, and an integer $\ell$, find a min-cost subgraph $H$ of $G$ with an orientation $D$ such that $D$ contains $\ell$ internally-disjoint $st$-paths and $\ell$ internally-disjoint $ts$-paths.
Steiner Forest Orientation Problems
4,839
We consider some generalizations of the Asymmetric Traveling Salesman Path problem. Suppose we have an asymmetric metric G = (V,A) with two distinguished nodes s,t. We are also given a positive integer k. The goal is to find k paths of minimum total cost from s to t whose union spans all nodes. We call this the k-Person Asymmetric Traveling Salesmen Path problem (k-ATSPP). Our main result for k-ATSPP is a bicriteria approximation that, for some parameter b >= 1 we may choose, finds between k and k + k/b paths of total length O(b log |V|) times the optimum value of an LP relaxation based on the Held-Karp relaxation for the Traveling Salesman problem. On one extreme this is an O(log |V|)-approximation that uses up to 2k paths and on the other it is an O(k log |V|)-approximation that uses exactly k paths. Next, we consider the case where we have k pairs of nodes (s_1,t_1), ..., (s_k,t_k). The goal is to find an s_i-t_i path for every pair such that each node of G lies on at least one of these paths. Simple approximation algorithms are presented for the special cases where the metric is symmetric or where s_i = t_i for each i. We also show that the problem can be approximated within a factor O(log n) when k=2. On the other hand, we demonstrate that the general problem cannot be approximated within any bounded ratio unless P = NP.
Multiple Traveling Salesmen in Asymmetric Metrics
4,840
A tabulation-based hash function maps a key into d derived characters indexing random values in tables that are then combined with bitwise xor operations to give the hash. Thorup and Zhang (2004) presented d-wise independent tabulation-based hash classes that use linear maps over finite fields to map a key, considered as a vector (a,b), to derived characters. We show that a variant where the derived characters are a+b*i for i=0,..., q-1 (using integer arithmetic) yields (2d-1)-wise independence. Our analysis is based on an algebraic property that characterizes k-wise independence of tabulation-based hashing schemes, and combines this characterization with a geometric argument. We also prove a non-trivial lower bound on the number of derived characters necessary for k-wise independence with our and related hash classes.
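For readers unfamiliar with the scheme, here is a minimal Python sketch of classical simple tabulation hashing: split the key into d characters, look each up in its own table of random words, and xor the results. The parameters, names, and the direct key-splitting are our illustrative choices; the paper's variant instead derives the characters as a+b*i from the key viewed as a vector (a,b):

```python
import random

def make_tabulation_hash(d=4, char_bits=8, seed=0):
    """Build a simple tabulation hash function.

    One table of 2**char_bits random 32-bit words per derived character;
    hashing a key xors together d table lookups. Illustrative sketch only.
    """
    rng = random.Random(seed)
    tables = [[rng.getrandbits(32) for _ in range(1 << char_bits)]
              for _ in range(d)]
    mask = (1 << char_bits) - 1

    def h(key):
        value = 0
        for i in range(d):
            c = (key >> (i * char_bits)) & mask  # i-th derived character
            value ^= tables[i][c]                # combine with bitwise xor
        return value

    return h

h = make_tabulation_hash()
print(h(0xDEADBEEF))
```

Note that each lookup touches a small table, which is what makes tabulation hashing fast in practice despite its strong independence properties.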
Independence of Tabulation-Based Hash Classes
4,841
We give an approximation algorithm for non-uniform sparsest cut with the following guarantee: For any $\epsilon,\delta \in (0,1)$, given cost and demand graphs with edge weights $C, D$ respectively, we can find a set $T\subseteq V$ with $\frac{C(T,V\setminus T)}{D(T,V\setminus T)}$ at most $\frac{1+\epsilon}{\delta}$ times the optimal non-uniform sparsest cut value, in time $2^{r/(\delta\epsilon)}\poly(n)$ provided $\lambda_r \ge \Phi^*/(1-\delta)$. Here $\lambda_r$ is the $r$'th smallest generalized eigenvalue of the Laplacian matrices of cost and demand graphs; $C(T,V\setminus T)$ (resp. $D(T,V\setminus T)$) is the weight of edges crossing the $(T,V\setminus T)$ cut in cost (resp. demand) graph and $\Phi^*$ is the sparsity of the optimal cut. In words, we show that the non-uniform sparsest cut problem is easy when the generalized spectrum grows moderately fast. To the best of our knowledge, there were no results based on higher order spectra for non-uniform sparsest cut prior to this work. Even for uniform sparsest cut, the quantitative aspects of our result are somewhat stronger than previous methods. Similar results hold for other expansion measures like edge expansion, normalized cut, and conductance, with the $r$'th smallest eigenvalue of the normalized Laplacian playing the role of $\lambda_r$ in the latter two cases. Our proof is based on an $\ell_1$-embedding of vectors from a semi-definite program from the Lasserre hierarchy. The embedded vectors are then rounded to a cut using standard threshold rounding. We hope that the ideas connecting $\ell_1$-embeddings to Lasserre SDPs will find other applications. Another aspect of the analysis is the adaptation of the column selection paradigm from our earlier work on rounding Lasserre SDPs [GS11] to pick a set of edges rather than vertices. This feature is important in order to extend the algorithms to non-uniform sparsest cut.
Approximating Non-Uniform Sparsest Cut via Generalized Spectra
4,842
Domains like bioinformatics, version control systems, collaborative editing systems (wikis), and others are producing huge data collections that are very repetitive. That is, there are few differences between the elements of the collection. This fact makes the compressibility of the collection extremely high. For example, a collection with all the versions of a Wikipedia article can be compressed to 0.1% of its original size using the Lempel-Ziv 1977 (LZ77) compression scheme. Many of these repetitive collections handle huge amounts of text data. For that reason, we require a method to store them efficiently, while providing the ability to operate on them. The most common operations are the extraction of random portions of the collection and the search for all the occurrences of a given pattern inside the whole collection. A self-index is a data structure that stores a text in compressed form and allows finding the occurrences of a pattern efficiently. Moreover, self-indexes can extract any substring of the collection, hence they are able to replace the original text. One of the main goals when using these indexes is to store them within main memory. In this thesis we present a scheme for random text extraction from text compressed with a Lempel-Ziv parsing. Additionally, we present a variant of LZ77, called LZ-End, that efficiently extracts text using space close to that of LZ77. The main contribution of this thesis is the first self-index based on LZ77/LZ-End and oriented to repetitive texts, which outperforms the state of the art (the RLCSA self-index) in many aspects. Finally, we present a corpus of repetitive texts coming from several application domains. We aim at providing a standard set of texts for research and experimentation, hence this corpus is publicly available.
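As an illustration of the LZ77 parsing that such indexes build on, here is a naive quadratic-time Python sketch producing (source, length, next-character) triples, where each phrase copies `length` characters from an earlier position (overlap allowed) and appends one fresh literal. Real parsers use suffix structures; the function names are ours:

```python
def lz77_parse(text):
    """Naive LZ77 parse into (source, length, next_char) triples.

    For each position, find the longest earlier occurrence of the upcoming
    text (copies may overlap the current position), then emit one literal.
    O(n^2)-time illustrative sketch only.
    """
    phrases = []
    i, n = 0, len(text)
    while i < n:
        best_len, best_src = 0, -1
        for j in range(i):
            l = 0
            # keep room for the trailing literal: stop at position n - 1
            while i + l < n - 1 and text[j + l] == text[i + l]:
                l += 1
            if l > best_len:
                best_len, best_src = l, j
        phrases.append((best_src, best_len, text[i + best_len]))
        i += best_len + 1
    return phrases

def lz77_decode(phrases):
    """Invert the parse; character-by-character copy handles overlaps."""
    out = []
    for src, length, ch in phrases:
        for l in range(length):
            out.append(out[src + l])
        out.append(ch)
    return "".join(out)

# The repetitive string collapses to 3 phrases: 'a', 'b', copy-4 + 'a'.
print(lz77_parse("abababa"))  # [(-1, 0, 'a'), (-1, 0, 'b'), (0, 4, 'a')]
```

Random substring extraction from such a parse is non-trivial precisely because decoding a phrase may recursively chase copies far back in the text, which is the problem the extraction scheme above addresses.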
Self-Index based on LZ77 (thesis)
4,843
We resolve several fundamental questions in the area of distributed functional monitoring, initiated by Cormode, Muthukrishnan, and Yi (SODA, 2008). In this model there are $k$ sites each tracking their input and communicating with a central coordinator that continuously maintains an approximate output to a function $f$ computed over the union of the inputs. The goal is to minimize the communication. We show the randomized communication complexity of estimating the number of distinct elements up to a $1+\eps$ factor is $\tilde{\Omega}(k/\eps^2)$, improving the previous $\Omega(k + 1/\eps^2)$ bound and matching known upper bounds up to a logarithmic factor. For the $p$-th frequency moment $F_p$, $p > 1$, we improve the previous $\Omega(k + 1/\eps^2)$ communication bound to $\tilde{\Omega}(k^{p-1}/\eps^2)$. We obtain similar improvements for heavy hitters, empirical entropy, and other problems. We also show that we can estimate $F_p$, for any $p > 1$, using $\tilde{O}(k^{p-1}\poly(\eps^{-1}))$ communication. This greatly improves upon the previous $\tilde{O}(k^{2p+1}N^{1-2/p} \poly(\eps^{-1}))$ bound of Cormode, Muthukrishnan, and Yi for general $p$, and their $\tilde{O}(k^2/\eps + k^{1.5}/\eps^3)$ bound for $p = 2$. For $p = 2$, our bound resolves their main open question. Our lower bounds are based on new direct sum theorems for approximate majority, and yield significant improvements to problems in the data stream model, improving the bound for estimating $F_p, p > 2,$ in $t$ passes from $\tilde{\Omega}(n^{1-2/p}/(\eps^{2/p} t))$ to $\tilde{\Omega}(n^{1-2/p}/(\eps^{4/p} t))$, giving the first bound for estimating $F_0$ in $t$ passes of $\Omega(1/(\eps^2 t))$ bits of space that does not use the gap-hamming problem.
Tight Bounds for Distributed Functional Monitoring
4,844
With more than four billion cellular phones in use worldwide, mobile advertising has become an attractive alternative to online advertising. In this paper, we propose a new targeted advertising policy for Wireless Service Providers (WSPs) via SMS or MMS, namely {\em AdCell}. In our model, a WSP charges the advertisers for showing their ads. Each advertiser has a valuation for specific types of customers at various times and locations and has a limit on the maximum available budget. Each query is in the form of time and location and is associated with one individual customer. In order to achieve a non-intrusive delivery, only a limited number of ads can be sent to each customer. Recently, new services have been introduced that offer location-based advertising over cellular networks and fit our model (e.g., ShopAlerts by AT&T). We consider both the online and offline versions of the AdCell problem and develop approximation algorithms with constant competitive ratios. For the online version, we assume that the appearances of the queries follow a stochastic distribution and thus consider a Bayesian setting. Furthermore, queries may come from different distributions at different times. This model generalizes several previous advertising models such as the online secretary problem \cite{HKP04}, online bipartite matching \cite{KVV90,FMMM09} and AdWords \cite{saberi05}. ...
AdCell: Ad Allocation in Cellular Networks
4,845
In this paper we present an implicit dynamic dictionary with the working-set property, supporting insert(e) and delete(e) in O(log n) time, predecessor(e) in O(log l_{p(e)}) time, successor(e) in O(log l_{s(e)}) time and search(e) in O(log min(l_{p(e)},l_{e}, l_{s(e)})) time, where n is the number of elements stored in the dictionary, l_{e} is the number of distinct elements searched for since element e was last searched for and p(e) and s(e) are the predecessor and successor of e, respectively. The time-bounds are all worst-case. The dictionary stores the elements in an array of size n using no additional space. In the cache-oblivious model the log is base B and the cache-obliviousness is due to our black box use of an existing cache-oblivious implicit dictionary. This is the first implicit dictionary supporting predecessor and successor searches in the working-set bound. Previous implicit structures required O(log n) time.
Cache-Oblivious Implicit Predecessor Dictionaries with the Working Set Property
4,846
We consider the file maintenance problem (also called the online labeling problem) in which n integer items from the set {1,...,r} are to be stored in an array of size m >= n. The items are presented sequentially in an arbitrary order, and must be stored in the array in sorted order (but not necessarily in consecutive locations in the array). Each new item must be stored in the array before the next item is received. If r <= m then we can simply store item j in location j, but if r > m then we may have to shift the locations of stored items to make space for a newly arrived item. The algorithm is charged each time an item is stored in the array, or moved to a new location. The goal is to minimize the total number of such moves done by the algorithm. This problem is non-trivial when n <= m < r. In the case that m=Cn for some C>1, algorithms for this problem with cost O(log(n)^2) per item have been given [IKR81, Wil92, BCD+02]. When m=n, algorithms with cost O(log(n)^3) per item were given [Zha93, BS07]. In this paper we prove lower bounds that show that these algorithms are optimal, up to constant factors. Previously, the only lower bound known for this range of parameters was a lower bound of \Omega(log(n)^2) for the restricted class of smooth algorithms [DSZ05a, Zha93]. We also provide an algorithm for the sparse case: If the number of items is polylogarithmic in the array size then the problem can be solved in amortized constant time per item.
Tight lower bounds for online labeling problem
4,847
We study the parameterized complexity of a robust generalization of the classical Feedback Vertex Set problem, namely the Group Feedback Vertex Set problem; we are given a graph G with edges labeled with group elements, and the goal is to compute the smallest set of vertices that hits all cycles of G that evaluate to a non-null element of the group. This problem generalizes not only Feedback Vertex Set, but also Subset Feedback Vertex Set, Multiway Cut and Odd Cycle Transversal. Completing the results of Guillemot [Discr. Opt. 2011], we provide a fixed-parameter algorithm for the parameterization by the size of the cutset only. Our algorithm works even if the group is given as a polynomial-time oracle.
On group feedback vertex set parameterized by the size of the cutset
4,848
Given a vertex-labeled graph, each vertex $v$ carries a label from a set of labels. A vertex-label query asks for the length of the shortest path from a given vertex to the set of vertices with a given label. For planar graphs, we show how to construct an oracle that uses $O(\frac{1}{\epsilon}n\log n)$ storage space and answers any vertex-label query in $O(\frac{1}{\epsilon}\log n\log \rho)$ time with stretch $1+\epsilon$, where $\rho$ is the radius of the given graph, i.e., half of the diameter. For the case that $\rho = O(\log n)$, we construct an oracle that achieves $O(\log n)$ query time, without changing the order of the storage space.
(1+epsilon)-Distance Oracle for Planar Labeled Graph
4,849
The sudoku minimum number of clues problem is the following question: what is the smallest number of clues that a sudoku puzzle can have? For several years it had been conjectured that the answer is 17. We have performed an exhaustive computer search for 16-clue sudoku puzzles, and did not find any, thus proving that the answer is indeed 17. In this article we describe our method and the actual search. As a part of this project we developed a novel way for enumerating hitting sets. The hitting set problem is computationally hard; it is one of Karp's 21 classic NP-complete problems. A standard backtracking algorithm for finding hitting sets would not be fast enough to search for a 16-clue sudoku puzzle exhaustively, even at today's supercomputer speeds. To make an exhaustive search possible, we designed an algorithm that allowed us to efficiently enumerate hitting sets of a suitable size.
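The standard backtracking scheme that the abstract contrasts its faster enumeration method against can be sketched as follows. This toy Python version (names and pruning are our illustrative choices, and it is far too slow for the 16-clue search) branches on the elements of an arbitrary un-hit set and enumerates a family of hitting sets of size at most k that contains every minimal one:

```python
def hitting_sets_upto(sets, k):
    """Backtracking enumeration of hitting sets of size <= k.

    While some set is still un-hit, it must contribute an element of any
    hitting set, so we branch on each of its elements; we prune once k
    elements have been chosen. Every minimal hitting set of size <= k
    is found this way (possibly along with some non-minimal ones).
    """
    sets = [set(s) for s in sets]
    results = set()

    def backtrack(chosen):
        unhit = next((s for s in sets if not (s & chosen)), None)
        if unhit is None:           # every set is hit
            results.add(frozenset(chosen))
            return
        if len(chosen) == k:        # budget exhausted, prune
            return
        for e in sorted(unhit):
            backtrack(chosen | {e})

    backtrack(frozenset())
    return results

# {2} hits both sets below, so it is the unique hitting set of size 1.
print(hitting_sets_upto([{1, 2}, {2, 3}], 1))  # {frozenset({2})}
```

In the sudoku application the sets to be hit are the "unavoidable sets" of a completed grid, and the clue sets are the candidate hitting sets; making this enumeration feasible at that scale is exactly the algorithmic contribution described above.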
There is no 16-Clue Sudoku: Solving the Sudoku Minimum Number of Clues Problem
4,850
We consider some difficulties that arise in applying the well-known sieve method when a practical (program) realization must select elements having a particular property from a set of sufficiently large cardinality. In this paper the problem is resolved by using a modified version of the method that utilizes multidimensional arrays. As a theoretical illustration of the multidimensional sieve method, we consider, with relevant mathematical proofs, the problem of obtaining a single representative of each equivalence class with respect to a given equivalence relation and computing the cardinality of the corresponding quotient (factor) set.
Method of the Multidimensional Sieve in the Practical Realization of some Combinatorial Algorithms
4,851
Trees are a fundamental data structure in many areas of computer science and systems engineering. In this report, we show how to ensure eventual consistency of optimistically replicated trees. In optimistic replication, the different replicas of a distributed system are allowed to diverge but should eventually reach the same value if no more mutations occur. A new method to ensure eventual consistency is to design Conflict-free Replicated Data Types (CRDTs). In this report, we design a collection of tree CRDTs using existing set CRDTs. The remaining concurrency problems particular to the tree data structure are resolved using one or two layers of correction algorithms. For each of these layers, we propose different and independent policies. Any combination of set CRDT and policies can be constructed, giving the distributed application programmer full control over the behavior of the shared data in the face of concurrent mutations. We also propose to order these trees by adding a positioning layer, likewise independent, to obtain a collection of ordered tree CRDTs.
Abstract unordered and ordered trees CRDT
4,852
We briefly report on the current state of a new dynamic algorithm for the route planning problem based on a concept of scope (the static variant was presented at ESA'11, HM2011A). We first motivate the dynamization of the concept of scope admissibility, and then describe a modification of the scope-aware query algorithm of HM2011A for dynamic road networks. Finally, we outline our future work on this concept.
Dynamic Scope-Based Dijkstra's Algorithm
4,853
We consider the problem of computing the k-sparse approximation to the discrete Fourier transform of an n-dimensional signal. We show: * An O(k log n)-time randomized algorithm for the case where the input signal has at most k non-zero Fourier coefficients, and * An O(k log n log(n/k))-time randomized algorithm for general input signals. Both algorithms achieve o(n log n) time, and thus improve over the Fast Fourier Transform, for any k = o(n). They are the first known algorithms that satisfy this property. Also, if one assumes that the Fast Fourier Transform is optimal, the algorithm for the exactly k-sparse case is optimal for any k = n^{\Omega(1)}. We complement our algorithmic results by showing that any algorithm for computing the sparse Fourier transform of a general signal must use at least \Omega(k log(n/k)/ log log n) signal samples, even if it is allowed to perform adaptive sampling.
Nearly Optimal Sparse Fourier Transform
4,854
This work studies the problem of 2-dimensional searching for the 3-sided range query of the form $[a, b]\times (-\infty, c]$ in both main and external memory, under a variety of input distributions. We present three sets of solutions, each of which examines the 3-sided problem in both the RAM and the I/O model. The presented data structures are deterministic, and the expectation is with respect to the input distribution.
Dynamic 3-sided Planar Range Queries with Expected Doubly Logarithmic Time
4,855
Given a string of characters, the Burrows-Wheeler Transform rearranges the characters in it so as to produce another string of the same length which is more amenable to compression techniques such as move-to-front, run-length encoding, and entropy encoders. We present a variant of the transform which gives rise to similar or better compression, but, unlike the original, the transform we present is bijective, in that the inverse transformation exists for all strings. Our experiments indicate that using our variant of the transform gives rise to a better compression ratio than the original Burrows-Wheeler Transform. We also show that both the transform and its inverse can be computed in linear time using linear storage.
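For reference, the classical sentinel-based transform that the bijective variant improves on can be sketched naively in Python. This quadratic-time sketch is illustrative only (the linear-time constructions mentioned above are more involved) and is not the paper's bijective variant, which needs no sentinel:

```python
def bwt(s, sentinel="\0"):
    """Classical Burrows-Wheeler Transform via sorted rotations.

    A sentinel smaller than every character is appended so the transform
    is invertible. O(n^2 log n) sketch; real constructions are linear.
    """
    s += sentinel
    rotations = sorted(s[i:] + s[:i] for i in range(len(s)))
    return "".join(rot[-1] for rot in rotations)

def inverse_bwt(t, sentinel="\0"):
    """Invert the BWT by repeatedly prepending t and re-sorting the rows."""
    table = [""] * len(t)
    for _ in range(len(t)):
        table = sorted(t[i] + table[i] for i in range(len(t)))
    row = next(r for r in table if r.endswith(sentinel))
    return row.rstrip(sentinel)

# 'banana' groups its 'a'-contexts together, the effect exploited by
# move-to-front and run-length coding downstream.
print(bwt("banana"))                      # 'annb\x00aa'
assert inverse_bwt(bwt("banana")) == "banana"
```

The appended sentinel is exactly what makes the classical transform non-bijective as a map on strings of a fixed length, which is the issue the variant described above removes.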
A Bijective String Sorting Transform
4,856
Binary relations are an important abstraction arising in many data representation problems. The data structures proposed so far to represent them support just a few basic operations required to fit one particular application. We identify many of those operations arising in applications and generalize them into a wide set of desirable queries for a binary relation representation. We also identify reductions among those operations. We then introduce several novel binary relation representations, some simple and some quite sophisticated, that not only are space-efficient but also efficiently support a large subset of the desired queries.
Compact Binary Relation Representations with Rich Functionality
4,857
Given a set of points $P \subset \mathbb{R}^d$, the $k$-means clustering problem is to find a set of $k$ {\em centers} $C = \{c_1,...,c_k\}, c_i \in \mathbb{R}^d,$ such that the objective function $\sum_{x \in P} d(x,C)^2$, where $d(x,C)$ denotes the distance between $x$ and the closest center in $C$, is minimized. This is one of the most prominent objective functions that have been studied with respect to clustering. $D^2$-sampling \cite{ArthurV07} is a simple non-uniform sampling technique for choosing points from a set of points. It works as follows: given a set of points $P \subseteq \mathbb{R}^d$, the first point is chosen uniformly at random from $P$. Subsequently, a point from $P$ is chosen as the next sample with probability proportional to the square of the distance of this point to the nearest previously sampled point. $D^2$-sampling has been shown to have nice properties with respect to the $k$-means clustering problem. Arthur and Vassilvitskii \cite{ArthurV07} show that $k$ points chosen as centers from $P$ using $D^2$-sampling give an $O(\log{k})$ approximation in expectation. Ailon et al. \cite{AJMonteleoni09} and Aggarwal et al. \cite{AggarwalDK09} extended the results of \cite{ArthurV07} to show that $O(k)$ points chosen as centers using $D^2$-sampling give an $O(1)$ approximation to the $k$-means objective function with high probability. In this paper, we further demonstrate the power of $D^2$-sampling by giving a simple randomized $(1 + \epsilon)$-approximation algorithm that uses $D^2$-sampling at its core.
A simple D^2-sampling based PTAS for k-means and other Clustering Problems
4,858
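The sampling procedure quoted above translates directly into code. The sketch below works over one-dimensional points to keep the distance computation short (the general case substitutes squared Euclidean distance), and the function name is ours rather than anything from the cited papers.

```python
import random


def d2_sample(points, k):
    """Choose k centers from points by D^2-sampling (k-means++ seeding).

    points is a list of 1-D coordinates; the first center is uniform at
    random, each later one is drawn with probability proportional to its
    squared distance to the nearest center chosen so far.
    """
    centers = [random.choice(points)]
    while len(centers) < k:
        # Squared distance of every point to its nearest current center.
        d2 = [min((x - c) ** 2 for c in centers) for x in points]
        r = random.uniform(0, sum(d2))
        acc = 0.0
        for x, w in zip(points, d2):
            acc += w
            if acc >= r:
                centers.append(x)
                break
    return centers
```

Because far-away points get proportionally more probability mass, each new center tends to land in a cluster not yet covered, which is the intuition behind the approximation guarantees cited above.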
In this paper, we first give a sequential linear-time algorithm for the longest path problem in meshes. This algorithm can be considered an improvement over [13]. Then, based on this sequential algorithm, we present a constant-time parallel algorithm for the problem which can run on any parallel machine.
An efficient parallel algorithm for the longest path problem in meshes
4,859
A central problem in e-commerce is determining overlapping communities among individuals or objects in the absence of external identification or tagging. We address this problem by introducing a framework that captures the notion of communities or clusters determined by the relative affinities among their members. To this end we define what we call an affinity system, which is a set of elements, each with a vector characterizing its preference for all other elements in the set. We define a natural notion of (potentially overlapping) communities in an affinity system, in which the members of a given community collectively prefer each other to anyone else outside the community. Thus these communities are endogenously formed in the affinity system and are "self-determined" or "self-certified" by its members. We provide a tight polynomial bound on the number of self-determined communities as a function of the robustness of the community. We present a polynomial-time algorithm for enumerating these communities. Moreover, we obtain a local algorithm with a strong stochastic performance guarantee that can find a community in time nearly linear in the size of the community. Social networks fit particularly naturally within the affinity system framework -- if we can appropriately extract the affinities from the relatively sparse yet rich information from social networks, our analysis then yields a set of efficient algorithms for enumerating self-determined communities in social networks. In the context of social networks we also connect our analysis with results about $(\alpha,\beta)$-clusters introduced by Mishra, Schreiber, Stanton, and Tarjan \cite{msst}. In contrast with the polynomial bound we prove on the number of communities in the affinity system model, we show that there exists a family of networks with a superpolynomial number of $(\alpha,\beta)$-clusters.
Finding Endogenously Formed Communities
4,860
This paper deals with the problem of computing, in an online fashion, a maximum benefit multi-commodity flow (ONMCF), where the flow demands may be bigger than the edge capacities of the network. We present an online, deterministic, centralized, all-or-nothing, bi-criteria algorithm. The competitive ratio of the algorithm is constant, and the algorithm augments the capacities by at most a logarithmic factor. The algorithm can handle two types of flow requests: (i) low demand requests that must be routed along a path, and (ii) high demand requests that may be routed using a multi-path flow. Two extensions are discussed: requests with known durations and machine scheduling.
Online Multi-Commodity Flow with High Demands
4,861
Let C be a finite set of N elements and R = {r_1, r_2, ..., r_M} a family of M subsets of C. A subset X of R verifies the Consecutive Ones Property (C1P) if there exists a permutation P of C such that each r_i in X is an interval of P. A Minimal Conflicting Set (MCS) S is a subset of R that does not verify the C1P, but such that any of its proper subsets does. In this paper, we present a new, simpler and faster algorithm to decide if a given element r in R belongs to at least one MCS. Our algorithm runs in O(N^2M^2 + NM^7), greatly improving on the current fastest O(M^6N^5 (M+N)^2 log(M+N)) algorithm of [Blin et al., CSR 2011]. The new algorithm is based on an alternative approach considering minimal forbidden induced subgraphs of interval graphs instead of Tucker matrices.
Faster and Simpler Minimal Conflicting Set Identification
4,862
This technical report describes the implementation of exact and parametrized exponential algorithms, developed during the French ANR Agape during 2010-2012. The developed algorithms are distributed under the CeCILL license and have been written in Java using the Jung graph library.
Implementation of exponential and parametrized algorithms in the AGAPE project
4,863
Let k be a natural number. Let G be a graph and let N_1,...,N_k be k independent sets in G. The graph G is k-probe distance hereditary if G can be embedded into a DH-graph by adding edges between vertices that are contained in the same independent set. We show that there exists a polynomial-time algorithm to check if a graph G is k-probe distance hereditary.
k-Probe DH-graphs
4,864
Given a graph G and integers b and w, the black-and-white coloring problem asks whether there exist disjoint sets of vertices B and W with |B|=b and |W|=w such that no vertex in B is adjacent to any vertex in W. In this paper we show that the problem is polynomial when restricted to permutation graphs.
The black-and-white coloring problem on permutation graphs
4,865
A hypergraph is a set V of vertices and a set of non-empty subsets of V, called hyperedges. Unlike graphs, hypergraphs can capture higher-order interactions in social and communication networks that go beyond a simple union of pairwise relationships. In this paper, we consider the shortest path problem in hypergraphs. We develop two algorithms for finding and maintaining the shortest hyperpaths in a dynamic network with both weight and topological changes. These two algorithms are the first to address the fully dynamic shortest path problem in a general hypergraph. They complement each other by partitioning the application space based on the nature of the change dynamics and the type of the hypergraph.
Dynamic Shortest Path Algorithms for Hypergraphs
4,866
In this paper we present the first algorithm in the streaming model to characterize completely the biconnectivity properties of undirected networks: articulation points, bridges, and connected and biconnected components. The motivation of our work was the development of a real-time algorithm to monitor the connectivity of the Autonomous Systems (AS) Network, but the solution provided is general enough to be applied to any network. The network structure is represented by a graph, and the algorithm is analyzed in the datastream framework. Here, as in the \emph{on-line} model, the input graph is revealed one item (i.e., graph edge) after the other, in an on-line fashion; but, if compared to traditional on-line computation, there are stricter requirements for both memory occupation and per item processing time. Our algorithm works by properly updating a forest over the graph nodes. All the graph (bi)connectivity properties are stored in this forest. We prove the correctness of the algorithm, together with its space ($O(n\,\log n)$, with $n$ being the number of nodes in the graph) and time bounds. We also present the results of a brief experimental evaluation against real-world graphs, including many samples of the AS network, ranging from medium to massive size. These preliminary experimental results confirm the effectiveness of our approach.
Real-Time Monitoring of Undirected Networks: Articulation Points, Bridges, and Connected and Biconnected Components
4,867
Let G be a graph and let N_1, ..., N_k be k independent sets in G. The graph G is a k-probe cograph if G can be embedded into a cograph by adding edges between vertices that are contained in the same independent set. We show that there exists an O(k n^5) algorithm to check if a graph G is a k-probe cograph.
A note on probe cographs
4,868
We address a version of the set-cover problem where we do not know the sets initially (and hence refer to them as covert) but we can query an element to find out which sets contain this element, as well as query a set to learn its elements. We want to find a small set-cover using a minimal number of such queries. We present a Monte Carlo randomized algorithm that approximates an optimal set-cover of size $OPT$ within an $O(\log N)$ factor with high probability using $O(OPT \cdot \log^2 N)$ queries, where $N$ is the input size. We apply this technique to the network discovery problem, which involves certifying all the edges and non-edges of an unknown $n$-vertex graph based on layered-graph queries from a minimal number of vertices. By reducing it to the covert set-cover problem, we present an $O(\log^2 n)$-competitive Monte Carlo randomized algorithm for the covert version of the network discovery problem. The previously best known algorithm has a competitive ratio of $\Omega(\sqrt{n\log n})$, and therefore our result achieves an exponential improvement.
The covert set-cover problem with application to Network Discovery
4,869
Many problems in bioinformatics are about finding strings that approximately represent a collection of given strings. We look at more general problems where some input strings can be classified as outliers. The Close to Most Strings problem is, given a set S of same-length strings, and a parameter d, find a string x that maximizes the number of "non-outliers" within Hamming distance d of x. We prove this problem has no PTAS unless ZPP=NP, correcting a decade-old mistake. The Most Strings with Few Bad Columns problem is to find a maximum-size subset of input strings so that the number of non-identical positions is at most k; we show it has no PTAS unless P=NP. We also observe Closest to k Strings has no EPTAS unless W[1]=FPT. In sum, outliers help model problems associated with using biological data, but we show the problem of finding an approximate solution is computationally difficult.
On Approximating String Selection Problems with Outliers
4,870
The alternation of existential and universal quantifiers in a quantified boolean formula (QBF) generates dependencies among variables that must be respected when evaluating the formula. Dependency schemes provide a general framework for representing such dependencies. Since it is generally intractable to determine dependencies exactly, a set of potential dependencies is computed instead, which may include false positives. Among the schemes proposed so far, resolution-path dependencies introduce the fewest spurious dependencies. In this work, we describe an algorithm that detects resolution-path dependencies in linear time, resolving a problem posed by Van Gelder (CP 2011).
Computing Resolution-Path Dependencies in Linear Time
4,871
Bille and G{\o}rtz (2011) recently introduced the problem of substring range counting, for which we are asked to store compactly a string $S$ of $n$ characters with integer labels in $[0, u]$, such that later, given an interval $[a, b]$ and a pattern $P$ of length $m$, we can quickly count the occurrences of $P$ whose first characters' labels are in $[a, b]$. They showed how to store $S$ in $O(n \log n / \log \log n)$ space and answer queries in $O(m + \log \log u)$ time. We show that, if $S$ is over an alphabet of size $\mathrm{polylog}(n)$, then we can achieve optimal linear space. Moreover, if $u = n\,\mathrm{polylog}(n)$, then we can also reduce the time to $O(m)$. Our results give linear space and time bounds for position-restricted substring counting and the counting versions of indexing substrings with intervals, indexing substrings with gaps and aligned pattern matching.
Linear-Space Substring Range Counting over Polylogarithmic Alphabets
4,872
We present an efficient algorithm for calculating $q$-gram frequencies on strings represented in compressed form, namely, as a straight line program (SLP). Given an SLP $\mathcal{T}$ of size $n$ that represents string $T$, the algorithm computes the occurrence frequencies of all $q$-grams in $T$, by reducing the problem to the weighted $q$-gram frequencies problem on a trie-like structure of size $m = |T|-\mathit{dup}(q,\mathcal{T})$, where $\mathit{dup}(q,\mathcal{T})$ is a quantity that represents the amount of redundancy that the SLP captures with respect to $q$-grams. The reduced problem can be solved in linear time. Since $m = O(qn)$, the running time of our algorithm is $O(\min\{|T|-\mathit{dup}(q,\mathcal{T}),qn\})$, improving our previous $O(qn)$ algorithm when $q = \Omega(|T|/n)$.
Speeding-up $q$-gram mining on grammar-based compressed texts
4,873
The maximum multicommodity flow problem is a natural generalization of the maximum flow problem to route multiple distinct flows. Obtaining a $1-\epsilon$ approximation to the multicommodity flow problem on graphs is a well-studied problem. In this paper we present an adaptation of recent advances in single-commodity flow algorithms to this problem. As the underlying linear systems in the electrical problems of multicommodity flow problems are no longer Laplacians, our approach is tailored to generate specialized systems which can be preconditioned and solved efficiently using Laplacians. Given an undirected graph with m edges and k commodities, we give algorithms that find $1-\epsilon$ approximate solutions to the maximum concurrent flow problem and the maximum weighted multicommodity flow problem in time $\tilde{O}(m^{4/3}\,\mathrm{poly}(k,\epsilon^{-1}))$.
Faster Approximate Multicommodity Flow Using Quadratically Coupled Flows
4,874
We investigate the problem of deterministic pattern matching in multiple streams. In this model, one symbol arrives at a time and is associated with one of s streaming texts. The task at each time step is to report if there is a new match between a fixed pattern of length m and a newly updated stream. As is usual in the streaming context, the goal is to use as little space as possible while still reporting matches quickly. We give almost matching upper and lower space bounds for three distinct pattern matching problems. For exact matching we show that the problem can be solved in constant time per arriving symbol and O(m+s) words of space. For the k-mismatch and k-difference problems we give O(k) time solutions that require O(m+ks) words of space. In all three cases we also give space lower bounds which show our methods are optimal up to a single logarithmic factor. Finally we set out a number of open problems related to this new model for pattern matching.
Pattern Matching in Multiple Streams
4,875
We consider basic problems of non-preemptive scheduling on uniformly related machines. For a given schedule, defined by a partition of the jobs into m subsets corresponding to the m machines, C_i denotes the completion time of machine i. Our goal is to find a schedule which minimizes or maximizes \sum_{i=1}^m C_i^p for a fixed value of p such that 0<p<\infty. For p>1 the minimization problem is equivalent to the well-known problem of minimizing the \ell_p norm of the vector of the completion times of the machines, and for 0<p<1 the maximization problem is of interest. Our main result is an efficient polynomial time approximation scheme (EPTAS) for each one of these problems. Our schemes use a non-standard application of the so-called shifting technique. We focus on the work (total size of jobs) assigned to each machine and introduce intervals of forbidden work. These intervals are defined so that the resulting effect on the goal function is sufficiently small. This allows the partition of the problem into sub-problems (with subsets of machines and jobs) whose solutions are combined into the final solution using dynamic programming. Our results are the first EPTAS's for this natural class of load balancing problems.
An efficient polynomial time approximation scheme for load balancing on uniformly related machines
4,876
We study a new variant of the string matching problem called cross-document string matching, which is the problem of indexing a collection of documents to support an efficient search for a pattern in a selected document, where the pattern itself is a substring of another document. Several variants of this problem are considered, and efficient linear-space solutions are proposed with query time bounds that either do not depend at all on the pattern size or depend on it in a very limited way (doubly logarithmic). As a side result, we propose an improved solution to the weighted level ancestor problem.
Cross-Document Pattern Matching
4,877
We study streaming algorithms for the interval selection problem: finding a maximum cardinality subset of disjoint intervals on the line. A deterministic 2-approximation streaming algorithm for this problem is developed, together with an algorithm for the special case of proper intervals, achieving improved approximation ratio of 3/2. We complement these upper bounds by proving that they are essentially best possible in the streaming setting: it is shown that an approximation ratio of $2 - \epsilon$ (or $3 / 2 - \epsilon$ for proper intervals) cannot be achieved unless the space is linear in the input size. In passing, we also answer an open question of Adler and Azar \cite{AdlerAzar03} regarding the space complexity of constant-competitive randomized preemptive online algorithms for the same problem.
Space-Constrained Interval Selection
4,878
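For context, the offline version of interval selection is solved exactly by the textbook earliest-right-endpoint greedy, which is the benchmark the streaming approximation ratios above are measured against. The sketch below is that offline baseline, not the paper's streaming algorithm.

```python
def max_disjoint_intervals(intervals):
    """Offline optimum for interval selection: scan intervals by increasing
    right endpoint, keeping each one disjoint from the last kept interval."""
    chosen = []
    last_end = float("-inf")
    for left, right in sorted(intervals, key=lambda iv: iv[1]):
        if left > last_end:  # closed intervals: touching endpoints overlap
            chosen.append((left, right))
            last_end = right
    return chosen
```

The streaming difficulty is that this greedy needs the intervals sorted by endpoint, whereas a one-pass algorithm must commit to a bounded-size summary as intervals arrive in arbitrary order.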
In the reordering buffer management problem (RBM) a sequence of $n$ colored items enters a buffer with limited capacity $k$. When the buffer is full, one item is removed to the output sequence, making room for the next input item. This step is repeated until the input sequence is exhausted and the buffer is empty. The objective is to find a sequence of removals that minimizes the total number of color changes in the output sequence. The problem formalizes numerous applications in computer and production systems, and is known to be NP-hard. We give the first constant factor approximation guarantee for RBM. Our algorithm is based on an intricate "rounding" of the solution to an LP relaxation for RBM, so it also establishes a constant upper bound on the integrality gap of this relaxation. Our results improve upon the best previous bound of $O(\sqrt{\log k})$ of Adamaszek et al. (STOC 2011) that used different methods and gave an online algorithm. Our constant factor approximation beats the super-constant lower bounds on the competitive ratio given by Adamaszek et al. This is the first demonstration of an offline algorithm for RBM that is provably better than any online algorithm.
A Constant Factor Approximation Algorithm for Reordering Buffer Management
4,879
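The buffer mechanics described above are easy to simulate. The sketch below measures the cost (number of color changes) of a given eviction rule, using a naive "stick with the current output color" greedy as the default; this greedy only illustrates the problem and is not the LP-rounding algorithm of the paper, and all names are ours.

```python
def rbm_cost(items, k, pick=None):
    """Simulate a reordering buffer of capacity k over a colored item
    sequence and count the color changes in the output sequence."""
    if pick is None:
        # Default eviction rule: keep emitting the current output color
        # while the buffer still holds it; otherwise evict the oldest item.
        def pick(buf, last):
            return next((i for i, c in enumerate(buf) if c == last), 0)

    buf, last, changes = [], None, 0

    def evict():
        nonlocal last, changes
        out = buf.pop(pick(buf, last))
        if out != last:
            changes += 1
            last = out

    for item in items:
        buf.append(item)
        if len(buf) > k:  # buffer full: one item must go to the output
            evict()
    while buf:  # input exhausted: drain the buffer
        evict()
    return changes
```

Plugging a different `pick` function into the simulator is a cheap way to compare eviction heuristics on concrete inputs.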
Given a planar triangulation, a 3-orientation is an orientation of the internal edges so all internal vertices have out-degree three. Each 3-orientation gives rise to a unique edge coloring known as a Schnyder wood that has proven powerful for various computing and combinatorics applications. We consider natural Markov chains for sampling uniformly from the set of 3-orientations. First, we study a "triangle-reversing" chain on the space of 3-orientations of a fixed triangulation that reverses the orientation of the edges around a triangle in each move. It was shown previously that this chain connects the state space and we show that (i) when restricted to planar triangulations of maximum degree six, the Markov chain is rapidly mixing, and (ii) there exists a triangulation with high degree on which this Markov chain mixes slowly. Next, we consider an "edge-flipping" chain on the larger state space consisting of 3-orientations of all planar triangulations on a fixed number of vertices. It was also shown previously that this chain connects the state space and we prove that the chain is always rapidly mixing. The triangle-reversing and edge-flipping Markov chains both arise in the context of sampling other combinatorial structures, such as Eulerian orientations and triangulations of planar point sets, so our results here may shed light on the mixing rate of these related chains as well.
Algorithms for Sampling 3-Orientations of Planar Triangulations
4,880
In a recent provocative paper, Lamport points out "the insubstantiality of processes" by proving the equivalence of two different decompositions of the same intuitive algorithm by means of temporal formulas. We point out that the correct equivalence of algorithms is itself in the eye of the beholder. We discuss a number of related issues and, in particular, whether algorithms can be proved equivalent directly.
Equivalence is in the Eye of the Beholder
4,881
We describe the architecture of an evolving algebra partial evaluator, a program which specializes an evolving algebra with respect to a portion of its input. We discuss the particular analysis, specialization, and optimization techniques used and show an example of its use.
An Offline Partial Evaluator for Evolving Algebras
4,882
We give an evolving algebra solution for the well-known railroad crossing problem and use the occasion to experiment with agents that perform instantaneous actions in continuous time and in particular with agents that fire at the moment they are enabled.
The Railroad Crossing Problem: An Experiment with Instantaneous Actions and Immediate Reactions
4,883
The second Software Engineering Institute Product Line Practice Workshop was a hands-on meeting held in November 1997 to share industry practices in software product lines and to explore the technical and non-technical issues involved. This report synthesizes the workshop presentations and discussions, which identified factors involved in product line practices and analyzed issues in the areas of software engineering, technical management, and enterprise management.
Second Product Line Practice Workshop Report
4,884
Systematic testing of object-oriented software has turned out to be much more complex than testing conventional software. In particular, the highly incremental and iterative development cycle entails many more changes and many partially implemented or re-implemented classes. Much more integration and regression testing has to be done to reach stable stages during development. In this presentation we propose a diagram capturing all possible dependencies and interactions in an object-oriented program. We then give algorithms and coverage criteria to identify integration and regression test strategies, and all test cases to be executed after given implementation or modification activities. Finally, we summarize some practical experiences and heuristics.
Managing Object-Oriented Integration and Regression Testing
4,885
Scripting languages are becoming more and more important as a tool for software development, as they provide great flexibility for rapid prototyping and for configuring componentware applications. In this paper we present LuaJava, a scripting tool for Java. LuaJava adopts Lua, a dynamically typed interpreted language, as its script language. Great emphasis is given to the transparency of the integration between the two languages, so that objects from one language can be used inside the other like native objects. The final result of this integration is a tool that allows the construction of configurable Java applications, using off-the-shelf components, in a high abstraction level.
LuaJava - A Scripting Tool for Java
4,886
This paper gives an overview of SCR3 -- a toolset designed to increase the usability of formal methods for software development. Formal requirements are specified in SCR3 in an easy to use and review format, and then used in checking requirements for correctness and in verifying consistency between annotated code and requirements. In this paper we discuss motivations behind this work, describe several tools which are part of SCR3, and illustrate their operation on an example of a Cruise Control system.
SCR3: towards usability of formal methods
4,887
For over a decade, researchers in formal methods have tried to create formalisms that permit natural specification of systems and allow mathematical reasoning about their correctness. The availability of fully-automated reasoning tools enables more non-specialists to use formal methods effectively --- their responsibility reduces to just specifying the model and expressing the desired properties. Thus, it is essential that these properties be represented in a language that is easy to use and sufficiently expressive. Linear-time temporal logic is a formalism that has been extensively used by researchers for specifying properties of systems. When such properties are closed under stuttering, i.e., their interpretation is not modified by transitions that leave the system in the same state, verification tools can utilize a partial-order reduction technique to reduce the size of the model and thus analyze larger systems. If LTL formulas do not contain the "next" operator, the formulas are closed under stuttering, but the resulting language is not expressive enough to capture many important properties, e.g., properties involving events. Determining if an arbitrary LTL formula is closed under stuttering is hard --- it has been proven to be PSPACE-complete. In this paper we relax the restriction on LTL that guarantees closure under stuttering, introduce the notion of edges in the context of LTL, and provide theorems that enable syntactic reasoning about closure under stuttering of LTL formulas.
Events in Linear-Time Properties
4,888
This paper describes a case study conducted in collaboration with Nortel to demonstrate the feasibility of applying formal modeling techniques to telecommunication systems. A formal description language, SDL, was chosen by our qualitative CASE tool evaluation to model a multimedia-messaging system described by an 80-page natural language specification. Our model was used to identify errors in the software requirements document and to derive test suites, shadowing the existing development process and keeping track of a variety of productivity data.
Formal Modeling in a Commercial Setting: A Case Study
4,889
A reasonable C++ Java Native Interface (JNI) technique termed C++ Wrappered JNI (C++WJ) is presented. The technique simplifies current error-prone JNI development by wrappering JNI calls. Provided development is done with the aid of a C++ compiler, C++WJ offers type checking and behind the scenes caching. A tool (jH) patterned on javah automates the creation of C++WJ classes. The paper presents the rationale behind the choices that led to C++WJ. Handling of Java class and interface hierarchy including Java type downcasts is discussed. Efficiency considerations in the C++WJ lead to two flavors of C++ classes: jtypes and Jtypes. A jtype is a lightweight less than full wrapper of a JNI object reference. A Jtype is a heavyweight full wrapper of a JNI object reference.
A Reasonable C++ Wrappered Java Native Interface
4,890
One of the key concepts in testing is that of adequate test sets. A test selection criterion decides which test sets are adequate. In this paper, a language schema for specifying a large class of test selection criteria is developed; the schema is based on two operations for building complex criteria from simple ones. Basic algebraic properties of the two operations are derived. In the second part of the paper, a simple language-an instance of the general schema-is studied in detail, with the goal of generating small adequate test sets automatically. It is shown that one version of the problem is intractable, while another is solvable by an efficient algorithm. An implementation of the algorithm is described.
Computation in an algebra of test selection criteria
4,891
Although attribute grammars are commonly used for compiler construction, little investigation has been conducted on debugging attribute grammars. The paper proposes two systematic debugging methods, algorithmic debugging and slice-based debugging, both tailored for attribute grammars. By means of query-based interaction with the developer, our debugging methods effectively narrow the potential bug space in the attribute grammar description and eventually identify the incorrect attribution rule. We have incorporated this technology in our visual debugging tool called Aki.
Systematic Debugging of Attribute Grammars
4,892
A program fails. Under which circumstances does this failure occur? One single algorithm, the delta debugging algorithm, suffices to determine these failure-inducing circumstances. Delta debugging tests a program systematically and automatically to isolate failure-inducing circumstances such as the program input, changes to the program code, or executed statements.
Finding Failure Causes through Automated Testing
4,893
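The delta debugging algorithm (ddmin) referred to above can be sketched compactly: test contiguous chunks of the circumstances and their complements at increasing granularity until the failure-inducing set is 1-minimal. Here `fails(subset)` stands for re-running the program under a candidate subset of circumstances; the code and its names are our illustration of the published scheme.

```python
def ddmin(inp, fails):
    """Return a 1-minimal sublist of inp on which fails(...) is still True."""
    n = 2  # current granularity: number of chunks
    while len(inp) >= 2:
        # Split inp into n roughly equal contiguous chunks.
        subsets, start = [], 0
        for i in range(n):
            end = start + (len(inp) - start) // (n - i)
            subsets.append(inp[start:end])
            start = end
        reduced = False
        for i, subset in enumerate(subsets):
            if fails(subset):  # failure reproduced on a single chunk
                inp, n, reduced = subset, 2, True
                break
            complement = [x for j, s in enumerate(subsets) if j != i for x in s]
            if fails(complement):  # failure reproduced without chunk i
                inp, n, reduced = complement, max(n - 1, 2), True
                break
        if not reduced:
            if n >= len(inp):
                break  # single-element granularity reached: 1-minimal
            n = min(2 * n, len(inp))  # refine the split and retry
    return inp
```

In practice `inp` might be lines of an input file, applied code changes, or executed statements, matching the three kinds of failure-inducing circumstances named in the abstract.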
Slicing is a program analysis technique originally developed for imperative languages. It facilitates understanding of data flow and debugging. This paper discusses slicing of Constraint Logic Programs. Constraint Logic Programming (CLP) is an emerging software technology with a growing number of applications. Data flow in constraint programs is not explicit, and for this reason the concepts of slice and the slicing techniques of imperative languages are not directly applicable. This paper formulates declarative notions of slice suitable for CLP. They provide a basis for defining slicing techniques (both dynamic and static) based on variable sharing. The techniques are further extended by using groundness information. A prototype dynamic slicer of CLP programs implementing the presented ideas is briefly described together with the results of some slicing experiments.
Slicing of Constraint Logic Programs
4,894
Software architecture is receiving increasing attention as a critical design level for software systems. As software architecture design resources (in the form of architectural specifications) accumulate, the development of techniques and tools to support architectural understanding, testing, reengineering, maintenance, and reuse will become an important issue. This paper introduces a new form of slicing, named architectural slicing, to aid architectural understanding and reuse. In contrast to traditional slicing, architectural slicing is designed to operate on the architectural specification of a software system, rather than the source code of a program. Architectural slicing provides knowledge about the high-level structure of a software system, rather than the low-level implementation details of a program. In order to compute an architectural slice, we present the architecture information flow graph which can be used to represent information flows in a software architecture. Based on the graph, we give a two-phase algorithm to compute an architectural slice.
Applying Slicing Technique to Software Architectures
4,895
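An architectural slice, as described above, is computed on an architecture information flow graph rather than on source code; one plausible reading of the two-phase algorithm is a backward plus a forward reachability pass from the component of interest. The sketch below illustrates that reading with a hypothetical edge-list graph; the component names and the exact slice definition are illustrative assumptions, not the paper's algorithm.

```python
from collections import defaultdict

def architectural_slice(flows, component):
    """Components in the slice of `component`: the component itself,
    everything it can send information to (forward pass), and
    everything that can send information to it (backward pass).

    `flows` is a list of (src, dst) information-flow edges.
    """
    succ, pred = defaultdict(set), defaultdict(set)
    for src, dst in flows:
        succ[src].add(dst)
        pred[dst].add(src)

    def reach(start, edges):
        seen, stack = set(), [start]
        while stack:
            for nxt in edges[stack.pop()]:
                if nxt not in seen:
                    seen.add(nxt)
                    stack.append(nxt)
        return seen

    return {component} | reach(component, succ) | reach(component, pred)

flows = [("UI", "Logic"), ("Logic", "DB"), ("Logger", "DB")]
print(sorted(architectural_slice(flows, "Logic")))  # ['DB', 'Logic', 'UI']
```

Note that `Logger` is excluded: it shares a sink with `Logic` but exchanges no information with it.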
Software architecture is receiving increasing attention as a critical design level for software systems. As software architecture design resources (in the form of architectural descriptions) are going to be accumulated, the development of techniques and tools to support architectural understanding, testing, reengineering, maintenance, and reuse will become an important issue. In this paper we introduce a new dependence analysis technique, named architectural dependence analysis, to support software architecture development. In contrast to traditional dependence analysis, architectural dependence analysis is designed to operate on an architectural description of a software system, rather than the source code of a conventional program. Architectural dependence analysis provides knowledge of dependences for the high-level architecture of a software system, rather than the low-level implementation details of a conventional program.
Using Dependence Analysis to Support Software Architecture Understanding
4,896
This paper proposes some new architectural metrics which are appropriate for evaluating the architectural attributes of a software system. The main feature of our approach is to assess the complexity of a software architecture by analyzing various types of architectural dependences in the architecture.
On Assessing the Complexity of Software Architectures
4,897
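The abstract above assesses complexity by analyzing architectural dependences. As a toy illustration of a dependence-counting metric (this particular metric is an assumption for illustration, not the paper's definition), one can tally outgoing dependence edges per component:

```python
from collections import Counter

def dependence_complexity(deps):
    """Toy architectural complexity metric: the number of outgoing
    dependences per component, given (src, dst) dependence edges."""
    return Counter(src for src, _dst in deps)

deps = [("A", "B"), ("A", "C"), ("B", "C")]
print(dependence_complexity(deps))  # Counter({'A': 2, 'B': 1})
```

A real metric suite would distinguish dependence types (e.g. component-connector vs. connector-component) and weight them, but the counting skeleton is the same.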
Test suites are designed to validate the operation of a system against requirements. One important aspect of a test suite design is to ensure that system operation logic is tested completely. A test suite should drive a system through all abstract states to exercise all possible cases of its operation. This is a difficult task. Code coverage tools support test suite designers by providing the information about which parts of source code are covered during system execution. Unfortunately, code coverage tools produce only source code coverage information. For a test engineer it is often hard to understand what the noncovered parts of the source code do and how they relate to requirements. We propose a generic approach that provides design coverage of the executed software, simplifying the development of new test suites. We demonstrate our approach on common design abstractions such as statecharts, activity diagrams, message sequence charts and structure diagrams. We implement design coverage using the Third Eye tracing and trace analysis framework. Using design coverage, test suites could be created faster by focusing on untested design elements.
Tracing Execution of Software for Design Coverage
4,898
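The design-coverage idea above can be reduced, for a statechart, to checking which design states an execution trace actually visits. The sketch below illustrates that check; the state names and the function are hypothetical, not the Third Eye framework's API.

```python
def design_coverage(design_states, trace):
    """Return the statechart states exercised by an execution trace,
    plus the covered fraction of the design model.

    `design_states` is the set of states in the design; `trace` is
    the sequence of states observed at run time.
    """
    covered = design_states & set(trace)
    return covered, len(covered) / len(design_states)

states = {"Idle", "Running", "Paused", "Stopped"}
covered, ratio = design_coverage(states, ["Idle", "Running", "Idle", "Stopped"])
print(sorted(covered), ratio)  # ['Idle', 'Running', 'Stopped'] 0.75
```

The uncovered remainder (`Paused` here) is exactly what a test designer would target with new test cases.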
The strategy used to develop the NIF Integrated Computer Control System (ICCS) calls for incremental cycles of construction and formal test to deliver a total of 1 million lines of code. Each incremental release takes four to six months to implement specific functionality and culminates when offline tests conducted in the ICCS Integration and Test Facility verify functional, performance, and interface requirements. Tests are then repeated on line to confirm integrated operation in dedicated laser laboratories or ultimately in the NIF. Test incidents along with other change requests are recorded and tracked to closure by the software change control board (SCCB). Annual independent audits advise management on software process improvements. Extensive experience has been gained by integrating controls in the prototype laser preamplifier laboratory. The control system installed in the preamplifier lab contains five of the ten planned supervisory subsystems and seven of sixteen planned front-end processors (FEPs). Beam alignment, timing, diagnosis and laser pulse amplification up to 20 joules was tested through an automated series of shots. Other laboratories have provided integrated testing of six additional FEPs. Process measurements including earned-value, product size, and defect densities provide software project controls and generate confidence that the control system will be successfully deployed.
Quality Control, Testing and Deployment Results in NIF ICCS
4,899