Acceleration in symbolic verification consists of computing the exact effect of certain control-flow loops in order to speed up the iterative fixed-point computation of reachable states. Even though no termination guarantee is provided in theory, successful results have been obtained in practice by several tools implementing this framework. In this paper, the acceleration framework is extended to data-flow analysis. Compared to a classical widening/narrowing-based abstract interpretation, the loss of precision is controlled here by the choice of the abstract domain and does not depend on the way the abstract value is computed. Our approach is geared towards precision, but we do not lose efficiency along the way. Indeed, we provide a cubic-time acceleration-based algorithm for solving interval constraints with full multiplication.
Accelerated Data-Flow Analysis
Arithmetic automata recognize infinite words of digits denoting decompositions of real and integer vectors. These automata are known to be expressive and efficient enough to represent the whole set of solutions of complex linear constraints combining both integral and real variables. In this paper, the closed convex hull of arithmetic automata is proved to be rational polyhedral. Moreover, an algorithm computing the linear constraints defining these convex sets is provided. Such an algorithm is useful for effectively extracting geometrical properties of the whole set of solutions of complex constraints symbolically represented by arithmetic automata.
Convex Hull of Arithmetic Automata
How many random entries of an n by m, rank r matrix are necessary to reconstruct the matrix within an accuracy d? We address this question in the case of a random matrix with bounded rank, where the observed entries are chosen uniformly at random. We prove that, for any d>0, C(r,d)n observations are sufficient. Finally, we discuss the question of reconstructing the matrix efficiently, and demonstrate through extensive simulations that this task can be accomplished in n Poly(log n) operations, for small rank.
Learning Low Rank Matrices from O(n) Entries
For a static array A of n ordered objects, a range minimum query asks for the position of the minimum between two specified array indices. We show how to preprocess A into a scheme of size 2n+o(n) bits that allows us to answer range minimum queries on A in constant time. This space is asymptotically optimal in the important setting where access to A is not permitted after the preprocessing step. Our scheme can be computed in linear time, using only n + o(n) additional bits at construction time. An interesting by-product is that we also improve on LCA-computation in BPS- or DFUDS-encoded trees.
Optimal Succinctness for Range Minimum Queries
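For intuition, the classical baseline such succinct schemes improve on is the sparse-table RMQ: O(n log n) words of space, O(1) queries. Below is a minimal sketch of that baseline (not the 2n+o(n)-bit structure of the paper; class and variable names are ours):

```python
class SparseTableRMQ:
    """Classical O(n log n)-space, O(1)-query RMQ baseline.

    This is NOT the 2n + o(n)-bit scheme of the paper above; it is the
    textbook structure that the succinct solution improves upon.
    """

    def __init__(self, a):
        self.a = a
        n = len(a)
        self.log = [0] * (n + 1)
        for i in range(2, n + 1):
            self.log[i] = self.log[i // 2] + 1
        # table[j][i] = index of the minimum of a[i .. i + 2^j - 1]
        self.table = [list(range(n))]
        j = 1
        while (1 << j) <= n:
            prev, half, row = self.table[j - 1], 1 << (j - 1), []
            for i in range(n - (1 << j) + 1):
                l, r = prev[i], prev[i + half]
                row.append(l if a[l] <= a[r] else r)
            self.table.append(row)
            j += 1

    def query(self, i, j):
        """Position of the minimum of a[i..j], inclusive, in O(1) time."""
        k = self.log[j - i + 1]
        l, r = self.table[k][i], self.table[k][j - (1 << k) + 1]
        return l if self.a[l] <= self.a[r] else r

rmq = SparseTableRMQ([5, 2, 4, 7, 1, 3])
assert rmq.query(0, 3) == 1 and rmq.query(2, 5) == 4
```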
We give a priority queue that achieves the same amortized bounds as Fibonacci heaps. Namely, find-min requires O(1) worst-case time, insert, meld and decrease-key require O(1) amortized time, and delete-min requires $O(\log n)$ amortized time. Our structure is simple and promises an efficient practical behavior when compared to other known Fibonacci-like heaps. The main idea behind our construction is to propagate rank updates instead of performing cascaded cuts following a decrease-key operation, allowing for a relaxed structure.
The Violation Heap: A Relaxed Fibonacci-Like Heap
A minimax tree is similar to a Huffman tree except that, instead of minimizing the weighted average of the leaves' depths, it minimizes the maximum of any leaf's weight plus its depth. Golumbic (1976) introduced minimax trees and gave a Huffman-like, $O(n \log n)$-time algorithm for building them. Drmota and Szpankowski (2002) gave another $O(n \log n)$-time algorithm, which checks the Kraft Inequality in each step of a binary search. In this paper we show how Drmota and Szpankowski's algorithm can be made to run in linear time on a word RAM with $\Omega(\log n)$-bit words. We also discuss how our solution applies to problems in data compression, group testing and circuit design.
Minimax Trees in Linear Time
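The Kraft-based binary search mentioned above is short enough to illustrate concretely. The sketch below assumes integer weights and searches for the smallest achievable cost max(w_i + depth_i); it is a hedged simplification, not the linear-time word-RAM algorithm of the paper:

```python
from fractions import Fraction
from math import ceil, log2

def minimax_cost(weights):
    """Smallest M such that a binary tree exists with leaf depths l_i
    satisfying w_i + l_i <= M for all i.  A depth multiset is realizable
    exactly when the Kraft inequality sum(2^-l_i) <= 1 holds, so each
    step of the binary search is one Kraft check (integer weights assumed)."""
    def feasible(M):
        if any(M - w < 0 for w in weights):
            return False
        # deepest allowed depth M - w_i minimizes the Kraft sum
        return sum(Fraction(1, 2 ** (M - w)) for w in weights) <= 1

    lo = max(weights)                                    # depth >= 0
    hi = max(weights) + ceil(log2(len(weights))) + 1     # complete tree works
    while lo < hi:
        mid = (lo + hi) // 2
        if feasible(mid):
            hi = mid
        else:
            lo = mid + 1
    return lo

# weights 1, 1, 2 with depths 2, 2, 1 give cost max(w_i + l_i) = 3
assert minimax_cost([1, 1, 2]) == 3
```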
The 2008 Workshop on Algorithms for Modern Massive Data Sets (MMDS 2008), sponsored by the NSF, DARPA, LinkedIn, and Yahoo!, was held at Stanford University, June 25--28. The goals of MMDS 2008 were (1) to explore novel techniques for modeling and analyzing massive, high-dimensional, and nonlinearly-structured scientific and internet data sets; and (2) to bring together computer scientists, statisticians, mathematicians, and data analysis practitioners to promote cross-fertilization of ideas.
Algorithmic and Statistical Challenges in Modern Large-Scale Data Analysis are the Focus of MMDS 2008
We consider the problem of selecting the best subset of exactly $k$ columns from an $m \times n$ matrix $A$. We present and analyze a novel two-stage algorithm that runs in $O(\min\{mn^2,m^2n\})$ time and returns as output an $m \times k$ matrix $C$ consisting of exactly $k$ columns of $A$. In the first (randomized) stage, the algorithm randomly selects $\Theta(k \log k)$ columns according to a judiciously-chosen probability distribution that depends on information in the top-$k$ right singular subspace of $A$. In the second (deterministic) stage, the algorithm applies a deterministic column-selection procedure to select and return exactly $k$ columns from the set of columns selected in the first stage. Let $C$ be the $m \times k$ matrix containing those $k$ columns, let $P_C$ denote the projection matrix onto the span of those columns, and let $A_k$ denote the best rank-$k$ approximation to the matrix $A$. Then, we prove that, with probability at least 0.8, $$ \|A - P_C A\|_F \leq \Theta(k \log^{1/2} k) \|A-A_k\|_F. $$ This Frobenius norm bound is only a factor of $\sqrt{k \log k}$ worse than the best previously existing existential result and is roughly $O(\sqrt{k!})$ better than the best previous algorithmic result for the Frobenius norm version of this Column Subset Selection Problem (CSSP). We also prove that, with probability at least 0.8, $$ \|A - P_C A\|_2 \leq \Theta(k \log^{1/2} k)\|A-A_k\|_2 + \Theta(k^{3/4}\log^{1/4}k)\|A-A_k\|_F. $$ This spectral norm bound is not directly comparable to the best previously existing bounds for the spectral norm version of this CSSP. Our bound depends on $\|A-A_k\|_F$, whereas previous results depend on $\sqrt{n-k}\,\|A-A_k\|_2$; if these two quantities are comparable, then our bound is asymptotically worse by a $(k \log k)^{1/4}$ factor.
An Improved Approximation Algorithm for the Column Subset Selection Problem
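A rough numpy sketch of the two-stage strategy follows. It samples columns with probabilities driven by the top-$k$ right singular subspace and then keeps exactly $k$ of them; pivoted QR stands in here for the deterministic second stage, which is an assumption of this sketch, not the paper's exact subroutine:

```python
import numpy as np
from scipy.linalg import qr, svd

def two_stage_css(A, k, seed=0):
    """Two-stage column subset selection, sketched.

    Stage 1 samples Theta(k log k) columns from a distribution derived
    from the top-k right singular subspace; stage 2 deterministically
    keeps exactly k of them.  Pivoted QR is a stand-in for the
    deterministic column-selection procedure (an assumption)."""
    rng = np.random.default_rng(seed)
    m, n = A.shape
    _, _, Vt = svd(A, full_matrices=False)
    Vk = Vt[:k, :]                       # top-k right singular subspace
    probs = (Vk ** 2).sum(axis=0) / k    # leverage-score-style distribution
    c = min(n, int(np.ceil(4 * k * np.log(k + 1))) + k)
    stage1 = rng.choice(n, size=c, replace=False, p=probs / probs.sum())
    # Stage 2: pivoted QR on the sampled columns picks k of them.
    _, _, piv = qr(A[:, stage1], mode='economic', pivoting=True)
    return np.sort(stage1[piv[:k]])

A = np.vstack([np.eye(4), np.ones((1, 4))]) @ np.diag([10, 5, 1, 0.1])
print(two_stage_css(A, 2))   # indices of 2 representative columns
```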
In the Survivable Network Design problem (SNDP), we are given an undirected graph $G(V,E)$ with costs on edges, along with a connectivity requirement $r(u,v)$ for each pair $u,v$ of vertices. The goal is to find a minimum-cost subset $E^*$ of edges that satisfies the given set of pairwise connectivity requirements. In the edge-connectivity version we need to ensure that there are $r(u,v)$ edge-disjoint paths for every pair $u, v$ of vertices, while in the vertex-connectivity version the paths are required to be vertex-disjoint. The edge-connectivity version of SNDP is known to have a 2-approximation. However, no non-trivial approximation algorithm has been known so far for the vertex version of SNDP, except for special cases of the problem. We present an extremely simple algorithm to achieve an $O(k^3 \log n)$-approximation for this problem, where $k$ denotes the maximum connectivity requirement, and $n$ denotes the number of vertices. We also give a simple proof of the recently discovered $O(k^2 \log n)$-approximation result for the single-source version of vertex-connectivity SNDP. We note that in both cases, our analysis in fact yields slightly better guarantees in that the $\log n$ term in the approximation guarantee can be replaced with a $\log \tau$ term where $\tau$ denotes the number of distinct vertices that participate in one or more pairs with a positive connectivity requirement.
An $O(k^{3} \log n)$-Approximation Algorithm for Vertex-Connectivity Survivable Network Design
Constrained least-squares regression problems, such as the Nonnegative Least Squares (NNLS) problem, where the variables are restricted to take only nonnegative values, often arise in applications. Motivated by the recent development of the fast Johnson-Lindenstrauss transform, we present a fast random projection type approximation algorithm for the NNLS problem. Our algorithm employs a randomized Hadamard transform to construct a much smaller NNLS problem and solves this smaller problem using a standard NNLS solver. We prove that our approach finds a nonnegative solution vector that, with high probability, is close to the optimum nonnegative solution in a relative error approximation sense. We experimentally evaluate our approach on a large collection of term-document data and verify that it does offer considerable speedups without a significant loss in accuracy. Our analysis is based on a novel random projection type result that might be of independent interest. In particular, given a tall and thin matrix $\Phi \in \mathbb{R}^{n \times d}$ ($n \gg d$) and a vector $y \in \mathbb{R}^d$, we prove that the Euclidean length of $\Phi y$ can be estimated very accurately by the Euclidean length of $\tilde{\Phi}y$, where $\tilde{\Phi}$ consists of a small subset of (appropriately rescaled) rows of $\Phi$.
Random Projections for the Nonnegative Least-Squares Problem
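A minimal sketch of the sketch-and-solve idea, assuming a dense Gaussian projection in place of the randomized Hadamard transform used in the paper (kept simple at the cost of the fast-transform running time; all names below are ours):

```python
import numpy as np
from scipy.optimize import nnls

def sketched_nnls(A, b, sketch_rows, seed=0):
    """Solve min_{x >= 0} ||Ax - b|| approximately by projecting to a
    smaller problem and running a standard NNLS solver on it.

    Assumption: a Gaussian projection replaces the paper's subsampled
    randomized Hadamard transform, trading speed for brevity."""
    rng = np.random.default_rng(seed)
    n = A.shape[0]
    S = rng.standard_normal((sketch_rows, n)) / np.sqrt(sketch_rows)
    x, _ = nnls(S @ A, S @ b)     # much smaller NNLS instance, same solver
    return x

rng = np.random.default_rng(1)
A = np.abs(rng.standard_normal((2000, 10)))
x_true = np.abs(rng.standard_normal(10))
b = A @ x_true
x_fast = sketched_nnls(A, b, sketch_rows=200)
print(np.linalg.norm(A @ x_fast - b) / np.linalg.norm(b))  # small relative error
```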
In the k-Apex problem the task is to find at most k vertices whose deletion makes the given graph planar. The graphs for which there exists a solution form a minor closed class of graphs, hence by the deep results of Robertson and Seymour, there is an O(n^3) time algorithm for every fixed value of k. However, the proof is extremely complicated and the constants hidden by the big-O notation are huge. Here we give a much simpler algorithm for this problem with quadratic running time, by iteratively reducing the input graph and then applying techniques for graphs of bounded treewidth.
Obtaining a Planar Graph by Vertex Deletion
Given a set of $m$ agents and a set of $n$ items, where agent $A$ has utility $u_{A,i}$ for item $i$, our goal is to allocate items to agents to maximize fairness. Specifically, the utility of an agent is the sum of its utilities for items it receives, and we seek to maximize the minimum utility of any agent. While this problem has received much attention recently, its approximability has not been well-understood thus far: the best known approximation algorithm achieves an $\tilde{O}(\sqrt{m})$-approximation, and in contrast, the best known hardness of approximation stands at 2. Our main result is an approximation algorithm that achieves an $\tilde{O}(n^{\epsilon})$ approximation for any $\epsilon=\Omega(\log\log n/\log n)$ in time $n^{O(1/\epsilon)}$. In particular, we obtain poly-logarithmic approximation in quasi-polynomial time, and for any constant $\epsilon > 0$, we obtain $O(n^{\epsilon})$ approximation in polynomial time. An interesting aspect of our algorithm is that we use as a building block a linear program whose integrality gap is $\Omega(\sqrt m)$. We bypass this obstacle by iteratively using the solutions produced by the LP to construct new instances with significantly smaller integrality gaps, eventually obtaining the desired approximation. We also investigate the special case of the problem, where every item has a non-zero utility for at most two agents. We show that even in this restricted setting the problem is hard to approximate up to any factor better than 2, and show a factor $(2+\epsilon)$-approximation algorithm running in time $\mathrm{poly}(n,1/\epsilon)$ for any $\epsilon>0$. This special case can be cast as a graph edge orientation problem, and our algorithm can be viewed as a generalization of Eulerian orientations to weighted graphs.
On Allocating Goods to Maximize Fairness
We study generalized fixed-point equations over idempotent semirings and provide an efficient algorithm for detecting whether a sequence of Kleene iterations stabilizes after a finite number of steps. Previously known approaches considered only bounded semirings where there are no infinite descending chains. The main novelty of our work is that we deal with semirings without the boundedness restriction. Our study is motivated by several applications from interprocedural dataflow analysis. We demonstrate how the reachability problem for weighted pushdown automata can be reduced to solving equations in the framework mentioned above and we describe a few applications to demonstrate its usability.
Interprocedural Dataflow Analysis over Weight Domains with Infinite Descending Chains
Given a sequence A of 2n real numbers, the Even-Rank-Sum problem asks for the sum of the n values that are at the even positions in the sorted order of the elements in A. We prove that, in the algebraic computation-tree model, this problem has time complexity $\Theta(n \log n)$. This solves an open problem posed by Michael Shamos at the Canadian Conference on Computational Geometry in 2008.
An Ω(n log n) lower bound for computing the sum of even-ranked elements
We consider the following problem: given an unsorted array of $n$ elements, and a sequence of intervals in the array, compute the median in each of the subarrays defined by the intervals. We describe a simple algorithm which uses O(n) space and needs $O(n\log k + k\log n)$ time to answer the first $k$ queries. This improves previous algorithms by a logarithmic factor and matches a lower bound for $k=O(n)$. Since the algorithm decomposes the range of element values rather than the array, it has natural generalizations to higher dimensional problems -- it reduces a range median query to a logarithmic number of range counting queries.
Towards Optimal Range Medians
We present some results on particular elimination schemes for chordal graphs. Namely, we show that for any chordal graph we can construct in linear time a simplicial elimination scheme starting with a pending maximal clique attached via a minimal separator that is maximal (resp. minimal) under inclusion among all minimal separators.
On some simplicial elimination schemes for chordal graphs
Let pi_w denote the failure function of the Morris-Pratt algorithm for a word w. In this paper we study the following problem: given an integer array A[1..n], is there a word w over an arbitrary alphabet such that A[i]=pi_w[i] for all i? Moreover, what is the minimum required cardinality of the alphabet? We give a real-time linear algorithm for this problem in the unit-cost RAM model with $\Theta(\log n)$-bit word size. Our algorithm also returns a word w over a minimal alphabet such that pi_w = A, and uses just o(n) words of memory. Then we consider the function pi' instead of pi and give an online O(n log n) algorithm for this case. This is the first polynomial algorithm for the online version of this problem.
Online validation of the pi and pi' failure functions
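For reference, the object being validated, the failure function pi_w, is produced by the classic linear-time scan below (the paper's contribution is the converse direction, deciding whether a given array arises this way; this sketch only shows the forward computation, 0-indexed rather than the abstract's A[1..n]):

```python
def failure_function(w):
    """Morris-Pratt failure function: pi[i] is the length of the longest
    proper border (prefix that is also a suffix) of w[:i+1]."""
    pi = [0] * len(w)
    k = 0
    for i in range(1, len(w)):
        while k > 0 and w[i] != w[k]:
            k = pi[k - 1]       # fall back to the next shorter border
        if w[i] == w[k]:
            k += 1
        pi[i] = k
    return pi

assert failure_function("abacaba") == [0, 0, 1, 0, 1, 2, 3]
```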
In this paper, we develop a fully-dynamic algorithm for maintaining the cardinality of a maximum matching in a tree, using the construction of top-trees. The time complexities are as follows: 1. Initialization Time: $O(n \log n)$ to build the top-tree. 2. Update Time: $O(\log n)$. 3. Query Time: $O(1)$ to query the cardinality of the maximum matching and $O(\log n)$ to find whether a particular edge is matched.
An O(log n) Fully Dynamic Algorithm for Maximum Matching in a Tree
We study the weighted generalization of the edge coloring problem where the weight of each color class (matching) equals the weight of its heaviest edge, and the goal is to minimize the sum of the colors' weights. We present a 3/2-approximation algorithm for trees.
Max Edge Coloring of Trees
Collaborative editing consists of editing a common document shared by several independent sites. This may give rise to conflicts when two different users perform simultaneous incompatible operations. Centralized systems solve this problem by using locks that prevent some modifications from occurring and leave the resolution of conflicts to users. On the contrary, peer-to-peer (P2P) editing doesn't allow locks, and the optimistic approach uses an integration transformation IT that reconciles the conflicting operations and ensures convergence (all copies are identical on each site). Two properties TP1 and TP2, relating the set of allowed operations Op and the transformation IT, have been shown to ensure the correctness of the process. The choice of the set Op is crucial to define an integration operation that satisfies TP1 and TP2. Many existing algorithms don't satisfy these properties and are indeed incorrect, i.e., convergence is not guaranteed. No algorithm enjoying both properties is known for strings, and little work has been done for XML trees in a pure P2P framework (one that doesn't use time-stamps, for instance). We focus on editing unranked unordered labeled trees, so-called XML-like trees, such as those considered in the Harmony project. We show that no transformation satisfying TP1 and TP2 can exist for a first set of operations, but we show that TP1 and TP2 hold for a richer set of operations. We show how to combine our approach with any convergent editing process on strings (not necessarily based on an integration transformation) to get a convergent process.
Peer to Peer Optimistic Collaborative Editing on XML-like trees
Analyzing massive data sets has been one of the key motivations for studying streaming algorithms. In recent years, there has been significant progress in analysing distributions in a streaming setting, but the progress on graph problems has been limited. A main reason for this has been the existence of linear space lower bounds for even simple problems such as determining the connectedness of a graph. However, in many new scenarios that arise from social and other interaction networks, the number of vertices is significantly less than the number of edges. This has led to the formulation of the semi-streaming model where we assume that the space is (near) linear in the number of vertices (but not necessarily the edges), and the edges appear in an arbitrary (and possibly adversarial) order. In this paper we focus on graph sparsification, which is one of the major building blocks in a variety of graph algorithms. There has been a long history of (non-streaming) sampling algorithms that provide sparse graph approximations, and it is a natural question to ask whether sparsification can be achieved using a small space, and in addition using a single pass over the data. The question is interesting from the standpoint of both theory and practice and we answer it in the affirmative, by providing a one pass $\tilde{O}(n/\epsilon^{2})$ space algorithm that produces a sparsification that approximates each cut to a $(1+\epsilon)$ factor. We also show that $\Omega(n \log \frac1\epsilon)$ space is necessary for a one pass streaming algorithm to approximate the min-cut, improving upon the $\Omega(n)$ lower bound that arises from lower bounds for testing connectivity.
Graph Sparsification in the Semi-streaming Model
We explore various techniques to compress a permutation $\pi$ over n integers, taking advantage of ordered subsequences in $\pi$, while supporting its application $\pi(i)$ and the application of its inverse $\pi^{-1}(i)$ in small time. Our compression schemes yield several interesting byproducts, in many cases matching, improving or extending the best existing results on applications such as the encoding of a permutation in order to support iterated applications $\pi^k(i)$ of it, of integer functions, and of inverted lists and suffix arrays.
Compressed Representations of Permutations, and Applications
We study online nonclairvoyant speed scaling to minimize total flow time plus energy. We first consider the traditional model where the power function is $P(s) = s^\alpha$. We give a nonclairvoyant algorithm that is shown to be $O(\alpha^3)$-competitive. We then show an $\Omega(\alpha^{1/3-\epsilon})$ lower bound on the competitive ratio of any nonclairvoyant algorithm. We also show that there are power functions for which no nonclairvoyant algorithm can be $O(1)$-competitive.
Nonclairvoyant Speed Scaling for Flow and Energy
We consider the Work Function Algorithm for the k-server problem. We show that if the Work Function Algorithm is c-competitive, then it is also strictly (2c)-competitive. As a consequence of [Koutsoupias and Papadimitriou, JACM 1995] this also shows that the Work Function Algorithm is strictly (4k-2)-competitive.
On the Additive Constant of the k-server Work Function Algorithm
As the World Wide Web is growing rapidly, it is getting increasingly challenging to gather representative information about it. Instead of crawling the web exhaustively one has to resort to other techniques like sampling to determine the properties of the web. A uniform random sample of the web would be useful to determine the percentage of web pages in a specific language, on a topic or in a top level domain. Unfortunately, no approach has been shown to sample the web pages in an unbiased way. Three promising web sampling algorithms are based on random walks. They each have been evaluated individually, but making a comparison on different data sets is not possible. We directly compare these algorithms in this paper. We performed three random walks on the web under the same conditions and analyzed their outcomes in detail. We discuss the strengths and the weaknesses of each algorithm and propose improvements based on experimental results.
A Comparison of Techniques for Sampling Web Pages
This paper gives a brief overview of computation models for data stream processing, and it introduces a new model for multi-pass processing of multiple streams, the so-called mp2s-automata. Two algorithms for solving the set disjointness problem with these automata are presented. The main technical contribution of this paper is the proof of a lower bound on the size of memory and the number of heads that are required for solving the set disjointness problem with mp2s-automata.
Lower Bounds for Multi-Pass Processing of Multiple Data Streams
We consider the multivariate interlace polynomial introduced by Courcelle (2008), which generalizes several interlace polynomials defined by Arratia, Bollobas, and Sorkin (2004) and by Aigner and van der Holst (2004). We present an algorithm to evaluate the multivariate interlace polynomial of a graph with n vertices given a tree decomposition of the graph of width k. The best previously known result (Courcelle 2008) employs a general logical framework and leads to an algorithm with running time f(k)*n, where f(k) is doubly exponential in k. Analyzing the GF(2)-rank of adjacency matrices in the context of tree decompositions, we give a faster and more direct algorithm. Our algorithm uses 2^{3k^2+O(k)}*n arithmetic operations and can be efficiently implemented in parallel.
Fast Evaluation of Interlace Polynomials on Graphs of Bounded Treewidth
We consider a robust model, proposed by Scarf (1958), for stochastic optimization when only the marginal probabilities of (binary) random variables are given, and the correlation between the random variables is unknown. In the robust model, the objective is to minimize expected cost against the worst possible joint distribution with those marginals. We introduce the concept of correlation gap to compare this model to the stochastic optimization model that ignores correlations and minimizes expected cost under an independent Bernoulli distribution. We identify a class of functions, using concepts of summable cost sharing schemes from game theory, for which the correlation gap is well-bounded and the robust model can be approximated closely by the independent distribution model. As a result, we derive efficient approximation factors for many popular cost functions, like submodular functions, facility location, and Steiner tree. As a byproduct, our analysis also yields some new results in the areas of social welfare maximization and existence of Walrasian equilibria, which may be of independent interest.
Correlation Robust Stochastic Optimization
We consider an online scheduling problem, motivated by the issues present at the joints of networks using ATM and TCP/IP. Namely, IP packets have to be broken down into small ATM cells and sent out before their deadlines, but cells corresponding to different packets can be interwoven. More formally, we consider the online scheduling problem with preemptions, where each job j is revealed at release time r_j, has processing time p_j, deadline d_j and weight w_j. A preempted job can be resumed at any time. The goal is to maximize the total weight of all jobs completed on time. Our main results are as follows: we prove that if all jobs have processing time exactly k, the deterministic competitive ratio is between 2.598 and 5, and when the processing times are at most k, the deterministic competitive ratio is $\Theta(k/\log k)$.
Online Scheduling of Bounded Length Jobs to Maximize Throughput
We consider the problem of representing, in a compressed format, a bit-vector $S$ of $m$ bits with $n$ 1s, supporting the following operations, where $b \in \{0, 1\}$: $rank_b(S,i)$ returns the number of occurrences of bit $b$ in the prefix $S[1..i]$; $select_b(S,i)$ returns the position of the $i$th occurrence of bit $b$ in $S$. Such a data structure is called a \emph{fully indexable dictionary (FID)} [Raman et al., 2007], and is at least as powerful as predecessor data structures. Our focus is on space-efficient FIDs on the \textsc{ram} model with word size $\Theta(\lg m)$ and constant time for all operations, so that the time cost is independent of the input size. Given the bitstring $S$ to be encoded, having length $m$ and containing $n$ ones, the minimal amount of information that needs to be stored is $B(n,m) = \lceil \log {{m}\choose{n}} \rceil$. The state of the art in building a FID for $S$ is given in [Patrascu, 2008], using $B(n,m)+O(m / ((\log m/ t)^t)) + O(m^{3/4})$ bits to support the operations in $O(t)$ time. Here, we propose a parametric data structure exhibiting a time/space trade-off such that, for any real constants $0 < \delta \leq 1/2$, $0 < \epsilon \leq 1$, and integer $s > 0$, it uses \[ B(n,m) + O(n^{1+\delta} + n (\frac{m}{n^s})^\epsilon) \] bits and performs all the operations in time $O(s\delta^{-1} + \epsilon^{-1})$. The improvement is twofold: our redundancy can be lowered parametrically and, fixing $s = O(1)$, we get a constant-time FID whose space is $B(n,m) + O(m^\epsilon/\mathrm{poly}(n))$ bits, for sufficiently large $m$. This is a significant improvement compared to the previous bounds for the general case.
More Haste, Less Waste: Lowering the Redundancy in Fully Indexable Dictionaries
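As a point of reference for what a FID supports, here is an uncompressed two-level rank directory with a binary-searched select. It answers both queries but spends a word per block rather than the parametric redundancy achieved above (a baseline sketch; all names are ours):

```python
class SimpleRankSelect:
    """Plain (non-compressed) rank/select over a bit list.

    rank1(i) = number of ones in S[0..i); select1(j) = position of the
    j-th one (1-based).  A baseline directory, not the B(n,m) + o(...)
    structure of the paper."""

    BLOCK = 64

    def __init__(self, bits):
        self.bits = bits
        self.prefix = [0]                       # ones before each block
        for start in range(0, len(bits), self.BLOCK):
            self.prefix.append(self.prefix[-1] + sum(bits[start:start + self.BLOCK]))

    def rank1(self, i):
        b, off = divmod(i, self.BLOCK)
        return self.prefix[b] + sum(self.bits[b * self.BLOCK : b * self.BLOCK + off])

    def rank0(self, i):
        return i - self.rank1(i)

    def select1(self, j):
        lo, hi = 0, len(self.bits)              # binary search on rank1
        while lo < hi:
            mid = (lo + hi) // 2
            if self.rank1(mid + 1) < j:
                lo = mid + 1
            else:
                hi = mid
        return lo

s = SimpleRankSelect([0, 1, 1, 0, 1])
assert s.rank1(3) == 2 and s.select1(3) == 4 and s.rank0(5) == 2
```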
Given an undirected graph G=(V,E) and subset of terminals T \subseteq V, the element-connectivity of two terminals u,v \in T is the maximum number of u-v paths that are pairwise disjoint in both edges and non-terminals V \setminus T (the paths need not be disjoint in terminals). Element-connectivity is more general than edge-connectivity and less general than vertex-connectivity. Hind and Oellermann gave a graph reduction step that preserves the global element-connectivity of the graph. We show that this step also preserves local connectivity, that is, all the pairwise element-connectivities of the terminals. We give two applications of this reduction step to connectivity and network design problems: 1. Given a graph G and disjoint terminal sets T_1, T_2, ..., T_m, we seek a maximum number of element-disjoint Steiner forests where each forest connects each T_i. We prove that if each T_i is k-element-connected then there exist \Omega(\frac{k}{\log h \log m}) element-disjoint Steiner forests, where h = |\bigcup_i T_i|. If G is planar (or more generally, has fixed genus), we show that there exist \Omega(k) Steiner forests. Our proofs are constructive, giving poly-time algorithms to find these forests; these are the first non-trivial algorithms for packing element-disjoint Steiner Forests. 2. We give a very short and intuitive proof of a spider-decomposition theorem of Chuzhoy and Khanna in the context of the single-sink k-vertex-connectivity problem; this yields a simple and alternative analysis of an O(k \log n) approximation. Our results highlight the effectiveness of the element-connectivity reduction step; we believe it will find more applications in the future.
A Graph Reduction Step Preserving Element-Connectivity and Applications
This paper presents different methods for solving parallel machine scheduling problems with precedence constraints and setup times between the jobs. Limited discrepancy search methods mixed with local search principles, dominance conditions and specific lower bounds are proposed. The proposed methods are evaluated on a set of randomly generated instances and compared with previous results from the literature and those obtained with an efficient commercial solver. We conclude that our propositions are quite competitive and our results even outperform other approaches in most cases.
Parallel machine scheduling with precedence constraints and setup times
The heap is a basic data structure used in a wide variety of applications, including shortest path and minimum spanning tree algorithms. In this paper we explore the design space of comparison-based, amortized-efficient heap implementations. From a consideration of dynamic single-elimination tournaments, we obtain the binomial queue, a classical heap implementation, in a simple and natural way. We give four equivalent ways of representing heaps arising from tournaments, and we obtain two new variants of binomial queues, a one-tree version and a one-pass version. We extend the one-pass version to support key decrease operations, obtaining the {\em rank-pairing heap}, or {\em rp-heap}. Rank-pairing heaps combine the performance guarantees of Fibonacci heaps with simplicity approaching that of pairing heaps. Like pairing heaps, rank-pairing heaps consist of trees of arbitrary structure, but these trees are combined by rank, not by list position, and rank changes, but not structural changes, cascade during key decrease operations.
Heaps Simplified
We present in this short note a polynomial graph extension procedure that can be used to improve any graph isomorphism algorithm. This construction propagates new constraints from the isomorphism constraints of the input graphs (denoted by $G(V,E)$ and $G'(V',E')$). Thus, information from the edge structures of $G$ and $G'$ is "hashed" into the weighted edges of the extended graphs. A bijective mapping is an isomorphism of the initial graphs if and only if it is an isomorphism of the extended graphs. As such, the construction enables the identification of pairs of vertices $i\in V$ and $i'\in V'$ that cannot be mapped by any isomorphism $h^*:V \to V'$ (e.g. if the extended edges of $i$ and $i'$ are different). A forbidding matrix $F$, that encodes all pairs of incompatible mappings $(i,i')$, is constructed in order to be used by a different algorithm. Moreover, tests on numerous graph classes show that the matrix $F$ might leave only one compatible element for each $i \in V$.
A polynomial graph extension procedure for improving graph isomorphism algorithms
Rotation distance between trees measures the number of simple operations it takes to transform one tree into another. There are no known polynomial-time algorithms for computing rotation distance. In the case of ordered rooted trees, we show that the rotation distance between two ordered trees is fixed-parameter tractable in the parameter k, the rotation distance. The proof relies on the kernelization of the initial trees to trees with size bounded by 7k.
Rotation Distance is Fixed-Parameter Tractable
Rotation distance between rooted binary trees measures the number of simple operations it takes to transform one tree into another. There are no known polynomial-time algorithms for computing rotation distance. We give an efficient, linear-time approximation algorithm that estimates the rotation distance between ordered rooted binary trees within a provable factor of 2.
A Linear-Time Approximation Algorithm for Rotation Distance
In this note we improve a recent result by Arora, Khot, Kolla, Steurer, Tulsiani, and Vishnoi on solving the Unique Games problem on expanders. Given a $(1-\varepsilon)$-satisfiable instance of Unique Games with the constraint graph $G$, our algorithm finds an assignment satisfying at least a $1- C \varepsilon/h_G$ fraction of all constraints if $\varepsilon < c \lambda_G$ where $h_G$ is the edge expansion of $G$, $\lambda_G$ is the second smallest eigenvalue of the Laplacian of $G$, and $C$ and $c$ are some absolute constants.
How to Play Unique Games on Expanders
Cuckoo hashing is a highly practical dynamic dictionary: it provides amortized constant insertion time, worst case constant deletion time and lookup time, and good memory utilization. However, with a noticeable probability, during the insertion of n elements some insertion requires $\Omega(\log n)$ time. Whereas such an amortized guarantee may be suitable for some applications, in other applications (such as high-performance routing) this is highly undesirable. Recently, Kirsch and Mitzenmacher (Allerton '07) proposed a de-amortization of cuckoo hashing using various queueing techniques that preserve its attractive properties. Kirsch and Mitzenmacher demonstrated a significant improvement to the worst case performance of cuckoo hashing via experimental results, but they left open the problem of constructing a scheme with provable properties. In this work we follow Kirsch and Mitzenmacher and present a de-amortization of cuckoo hashing that provably guarantees constant worst case operations. Specifically, for any sequence of polynomially many operations, with overwhelming probability over the randomness of the initialization phase, each operation is performed in constant time. Our theoretical analysis and experimental results indicate that the scheme is highly efficient, and provides a practical alternative to the only other known dynamic dictionary with such worst case guarantees, due to Dietzfelbinger and Meyer auf der Heide (ICALP '90).
De-amortized Cuckoo Hashing: Provable Worst-Case Performance and Experimental Results
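The baseline being de-amortized is standard cuckoo hashing, whose insertion loop is sketched below (a textbook variant with two tables and a full rehash on failure; the paper's queue-based de-amortization is deliberately omitted, and all names are ours):

```python
import random

class CuckooHash:
    """Textbook cuckoo hashing: each key has one slot per table;
    insertion evicts along a path and rehashes when a cycle is likely.
    This is the amortized baseline, not the de-amortized scheme."""

    def __init__(self, capacity=8):
        self.size = capacity
        self._new_seeds()
        self.tables = [[None] * self.size, [None] * self.size]

    def _new_seeds(self):
        self.seeds = (random.randrange(1 << 30), random.randrange(1 << 30))

    def _slot(self, key, which):
        return hash((self.seeds[which], key)) % self.size

    def lookup(self, key):
        return any(self.tables[t][self._slot(key, t)] == key for t in (0, 1))

    def insert(self, key):
        if self.lookup(key):
            return
        for _ in range(32):                 # bounded eviction walk
            for t in (0, 1):
                i = self._slot(key, t)
                key, self.tables[t][i] = self.tables[t][i], key
                if key is None:
                    return                  # displaced nothing: done
        self._rehash(key)                   # probable cycle: rebuild

    def _rehash(self, pending):
        old = [k for tab in self.tables for k in tab if k is not None]
        self.size *= 2
        self._new_seeds()
        self.tables = [[None] * self.size, [None] * self.size]
        for k in old + [pending]:
            self.insert(k)

h = CuckooHash()
for k in range(20):
    h.insert(k)
assert all(h.lookup(k) for k in range(20))
```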
An out-tree $T$ is an oriented tree with only one vertex of in-degree zero. A vertex $x$ of $T$ is internal if its out-degree is positive. We design randomized and deterministic algorithms for deciding whether an input digraph contains a given out-tree with $k$ vertices. The algorithms are of runtime $O^*(5.704^k)$ and $O^*(5.704^{k(1+o(1))})$, respectively. We apply the deterministic algorithm to obtain a deterministic algorithm of runtime $O^*(c^k)$, where $c$ is a constant, for deciding whether an input digraph contains a spanning out-tree with at least $k$ internal vertices. This answers in the affirmative a question of Gutin, Razgon and Kim (Proc. AAIM'08).
Algorithm for Finding $k$-Vertex Out-trees and its Application to $k$-Internal Out-branching Problem
Improving the structure and analysis in \cite{elm0}, we give a variation of the pairing heaps that has amortized zero cost per meld (compared to an $O(\log \log{n})$ cost in \cite{elm0}) and the same amortized bounds for all other operations. More precisely, the new pairing heap requires: no cost per meld, O(1) per find-min and insert, $O(\log{n})$ per delete-min, and $O(\log\log{n})$ per decrease-key. These bounds are the best known for any self-adjusting heap, and match the lower bound proved by Fredman for a family of such heaps. Moreover, the changes we have made make our structure even simpler than that in \cite{elm0}.
Pairing Heaps with Costless Meld
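For context, the classic two-pass pairing heap that this line of work refines is sketched below (a baseline only; it does not implement the zero-cost meld accounting of the paper):

```python
class PairingHeap:
    """Classic two-pass pairing heap: O(1) insert/meld, amortized
    O(log n) delete-min.  Baseline, not the costless-meld variant."""

    class Node:
        __slots__ = ('key', 'children')
        def __init__(self, key):
            self.key, self.children = key, []

    def __init__(self):
        self.root = None

    @staticmethod
    def _link(a, b):
        """Make the larger root a child of the smaller one."""
        if b is None:
            return a
        if a is None or b.key < a.key:
            a, b = b, a
        a.children.append(b)
        return a

    def insert(self, key):
        self.root = self._link(self.root, self.Node(key))

    def meld(self, other):
        self.root, other.root = self._link(self.root, other.root), None

    def find_min(self):
        return self.root.key

    def delete_min(self):
        key, kids = self.root.key, self.root.children
        paired = [self._link(kids[i], kids[i + 1] if i + 1 < len(kids) else None)
                  for i in range(0, len(kids), 2)]      # pass 1: pair up
        root = None
        for h in reversed(paired):                      # pass 2: fold back
            root = self._link(root, h)
        self.root = root
        return key

h = PairingHeap()
for x in [5, 1, 4, 2, 3]:
    h.insert(x)
assert [h.delete_min() for _ in range(5)] == [1, 2, 3, 4, 5]
```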
Much research has been devoted to optimizing algorithms of the Lempel-Ziv (LZ) 77 family, both in terms of speed and memory requirements. Binary search trees and suffix trees (ST) are data structures that have often been used for this purpose, as they allow fast searches at the expense of memory usage. In recent years, there has been interest in suffix arrays (SA), due to their simplicity and low memory requirements. One key point is that an SA can solve the sub-string problem almost as efficiently as an ST, using less memory. This paper proposes two new SA-based algorithms for LZ encoding, which require no modifications on the decoder side. Experimental results on standard benchmarks show that our algorithms, though not faster, use 3 to 5 times less memory than the ST counterparts. Another important feature of our SA-based algorithms is that the amount of memory is independent of the text to search, thus the memory that has to be allocated can be defined a priori. These features of low and predictable memory requirements are of the utmost importance in several scenarios, such as embedded systems, where memory is at a premium and speed is not critical. Finally, we point out that the new algorithms are general, in the sense that they are adequate for applications other than LZ compression, such as text retrieval and forward/backward sub-string search.
On the Use of Suffix Arrays for Memory-Efficient Lempel-Ziv Data Compression
We show that the k-Dominating Set problem is fixed parameter tractable (FPT) and has a polynomial kernel for any class of graphs that exclude K_{i,j} as a subgraph, for any fixed i, j >= 1. This strictly includes every class of graphs for which this problem has been previously shown to have FPT algorithms and/or polynomial kernels. In particular, our result implies that the problem restricted to bounded-degenerate graphs has a polynomial kernel, solving an open problem posed by Alon and Gutner.
Solving Dominating Set in Larger Classes of Graphs: FPT Algorithms and Polynomial Kernels
We show how to use a balanced wavelet tree as a data structure that stores a list of numbers and supports efficient {\em range quantile queries}. A range quantile query takes a rank and the endpoints of a sublist and returns the number with that rank in that sublist. For example, if the rank is half the sublist's length, then the query returns the sublist's median. We also show how these queries can be used to support space-efficient {\em coloured range reporting} and {\em document listing}.
Range Quantile Queries: Another Virtue of Wavelet Trees
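The query described above is short enough to sketch in full: each wavelet-tree node stores which elements go to the high half of the value range, and prefix counts turn an interval in the parent into an interval in the chosen child. A minimal sketch (plain Python lists rather than succinct bitmaps):

```python
class WaveletTree:
    """Balanced wavelet tree over values in [lo, hi); supports range
    quantile: the k-th smallest (0-based) element of A[l..r)."""

    def __init__(self, a, lo=None, hi=None):
        if lo is None:
            lo, hi = min(a), max(a) + 1
        self.lo, self.hi = lo, hi
        if hi - lo > 1 and a:
            mid = (lo + hi) // 2
            # ones[i] = number of elements among a[:i] routed right
            self.ones = [0]
            for x in a:
                self.ones.append(self.ones[-1] + (x >= mid))
            self.left = WaveletTree([x for x in a if x < mid], lo, mid)
            self.right = WaveletTree([x for x in a if x >= mid], mid, hi)

    def quantile(self, l, r, k):
        """k-th smallest of A[l..r), for 0 <= k < r - l."""
        if self.hi - self.lo == 1:
            return self.lo
        ones_l, ones_r = self.ones[l], self.ones[r]
        zeros = (r - l) - (ones_r - ones_l)
        if k < zeros:              # answer lies in the low half
            return self.left.quantile(l - ones_l, r - ones_r, k)
        return self.right.quantile(ones_l, ones_r, k - zeros)

a = [6, 2, 8, 5, 3, 9, 4]
wt = WaveletTree(a)
assert wt.quantile(1, 6, 2) == 5        # median of [2, 8, 5, 3, 9]
assert wt.quantile(0, 7, 0) == 2
```

When the rank is half the sublist's length, as in the abstract's example, the same call returns the sublist's median.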
We study the problem of estimating the Earth Mover's Distance (EMD) between probability distributions when given access only to samples. We give closeness testers and additive-error estimators over domains in $[0, \Delta]^d$, with sample complexities independent of domain size - permitting the testability even of continuous distributions over infinite domains. Instead, our algorithms depend on other parameters, such as the diameter of the domain space, which may be significantly smaller. We also prove lower bounds showing the dependencies on these parameters to be essentially optimal. Additionally, we consider whether natural classes of distributions exist for which there are algorithms with better dependence on the dimension, and show that for highly clusterable data, this is indeed the case. Lastly, we consider a variant of the EMD, defined over tree metrics instead of the usual L1 metric, and give optimal algorithms.
Sublinear Time Algorithms for Earth Mover's Distance
In many applications we are required to increase the deployment of a distributed monitoring system on an evolving network. In this paper we present a new method for finding candidate locations for additional deployment in the network. This method is based on the Group Betweenness Centrality (GBC) measure that is used to estimate the influence of a group of nodes over the information flow in the network. The new method assists in finding the location of k additional monitors in the evolving network, such that the portion of additional traffic covered is at least (1-1/e) of the optimal.
Incremental Deployment of Network Monitors Based on Group Betweenness Centrality
We consider the online list s-batch problem, where all the jobs have processing time 1 and we seek to minimize the sum of the completion times of the jobs. We give a Java program which is used to verify that the competitiveness of this problem is 619/583.
A Program to Determine the Exact Competitive Ratio of List s-Batching with Unit Jobs
In a bounded max-coloring of a vertex/edge weighted graph, each color class is of cardinality at most $b$ and of weight equal to the weight of the heaviest vertex/edge in this class. The bounded max-vertex/edge-coloring problems ask for such a coloring minimizing the sum of all color classes' weights. In this paper we present complexity results and approximation algorithms for these problems on general graphs, bipartite graphs and trees. We first show that both problems are polynomial for trees, when the number of colors is fixed, and $H_b$-approximable for general graphs, when the bound $b$ is fixed. For the bounded max-vertex-coloring problem, we show a 17/11-approximation algorithm for bipartite graphs, and a PTAS for trees as well as for bipartite graphs when $b$ is fixed. For unit weights, we show that the known 4/3 lower bound for bipartite graphs is tight by providing a simple 4/3-approximation algorithm. For the bounded max-edge-coloring problem, we prove approximation factors of $3-2/\sqrt{2b}$ for general graphs, $\min\{e, 3-2/\sqrt{b}\}$ for bipartite graphs, and 2 for trees. Furthermore, we show that this problem is NP-complete even for trees. This is the first complexity result for max-coloring problems on trees.
Bounded Max-Colorings of Graphs
We give the first L_1-sketching algorithm for integer vectors which produces nearly optimal sized sketches in nearly linear time. This answers the first open problem in the list of open problems from the 2006 IITK Workshop on Algorithms for Data Streams. Specifically, suppose Alice receives a vector x in {-M,...,M}^n and Bob receives y in {-M,...,M}^n, and the two parties share randomness. Each party must output a short sketch of their vector such that a third party can later quickly recover a (1 +/- eps)-approximation to ||x-y||_1 with 2/3 probability given only the sketches. We give a sketching algorithm which produces O(eps^{-2}log(1/eps)log(nM))-bit sketches in O(n*log^2(nM)) time, independent of eps. The previous best known sketching algorithm for L_1 is due to [Feigenbaum et al., SICOMP 2002], which achieved the optimal sketch length of O(eps^{-2}log(nM)) bits but had a running time of O(n*log(nM)/eps^2). Notice that our running time is near-linear for every eps, whereas for sufficiently small values of eps, the running time of the previous algorithm can be as large as quadratic. Like their algorithm, our sketching procedure also yields a small-space, one-pass streaming algorithm which works even if the entries of x,y are given in arbitrary order.
A Near-Optimal Algorithm for L1-Difference
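For intuition, the classic 1-stable (Cauchy) sketch for L1 is a few lines of numpy. It is the slower predecessor in the running-time comparison above, not the paper's near-linear-time algorithm; all names below are ours:

```python
import numpy as np

def l1_sketch(v, k, seed=0):
    """Project v with a k x n Cauchy (1-stable) matrix.  By 1-stability,
    each coordinate of C(x - y) is ||x - y||_1 times a standard Cauchy
    variable, so a median of absolute values recovers the L1 distance.
    Classic scheme, not the paper's faster algorithm."""
    rng = np.random.default_rng(seed)   # same seed = shared randomness
    C = rng.standard_cauchy((k, len(v)))
    return C @ v

def estimate_l1(sx, sy):
    return np.median(np.abs(sx - sy))   # median of |Cauchy| is 1

rng = np.random.default_rng(42)
x = rng.integers(-100, 100, size=10_000)
y = rng.integers(-100, 100, size=10_000)
sx, sy = l1_sketch(x, 400), l1_sketch(y, 400)
print(estimate_l1(sx, sy), np.abs(x - y).sum())   # estimate vs truth
```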
Given an embedded planar acyclic digraph G, we define the problem of acyclic hamiltonian path completion with crossing minimization (Acyclic-HPCCM) to be the problem of determining a hamiltonian path completion set of edges such that, when these edges are embedded on G, they create the smallest possible number of edge crossings and turn G into a hamiltonian acyclic digraph. Our results include: 1. We provide a characterization under which a planar st-digraph G is hamiltonian. 2. For an outerplanar st-digraph G, we define the st-polygon decomposition of G and, based on its properties, we develop a linear-time algorithm that solves the Acyclic-HPCCM problem. 3. For the class of planar st-digraphs, we establish an equivalence between the Acyclic-HPCCM problem and the problem of determining an upward 2-page topological book embedding with a minimum number of spine crossings. Based on this equivalence, we obtain for the class of outerplanar st-digraphs an upward topological 2-page book embedding with a minimum number of spine crossings. To the best of our knowledge, this is the first time that edge-crossing minimization has been studied in conjunction with the acyclic hamiltonian completion problem and the first time that an optimal algorithm with respect to spine crossing minimization has been presented for upward topological book embeddings.
Crossing-Optimal Acyclic HP-Completion for Outerplanar st-Digraphs
Let P be a set of n points in the Euclidean plane and let O be the origin point in the plane. In the k-tour cover problem (frequently called the capacitated vehicle routing problem), the goal is to minimize the total length of tours that cover all points in P, such that each tour starts and ends in O and covers at most k points from P. The k-tour cover problem is known to be NP-hard. It is also known to admit constant factor approximation algorithms for all values of k and even a polynomial-time approximation scheme (PTAS) for small values of k, i.e., k=O(log n / log log n). We significantly enlarge the set of values of k for which a PTAS is provable. We present a new PTAS for all values of k <= 2^{log^{\delta}n}, where \delta = \delta(\epsilon). The main technical result proved in the paper is a novel reduction of the k-tour cover problem with a set of n points to a small set of instances of the problem, each with O((k/\epsilon)^O(1)) points.
PTAS for k-tour cover problem on the plane for moderately large values of k
The diameter of a graph is among its most basic parameters. For a few years now, it has moreover been a key issue to compute it for massive graphs in the context of complex network analysis. However, known algorithms, including the ones producing approximate values, have too high a time and/or space complexity to be used in such cases. We propose here a new approach relying on very simple and fast algorithms that compute (upper and lower) bounds for the diameter. We show empirically that, on various real-world cases representative of complex networks studied in the literature, the obtained bounds are very tight (and even equal in some cases). This leads to rigorous and very accurate estimations of the actual diameter in cases which were previously intractable in practice.
Fast Computation of Empirically Tight Bounds for the Diameter of Massive Graphs
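Bounds of the flavor described above can already be obtained with a couple of BFS traversals: the eccentricity of any vertex v gives ecc(v) <= diameter <= 2*ecc(v), and sweeping again from the farthest vertex found usually tightens the lower bound. A minimal double-sweep sketch (the paper's techniques are more refined):

```python
from collections import deque

def bfs_farthest(adj, src):
    """Return (farthest vertex, its distance) from src by BFS."""
    dist, q = {src: 0}, deque([src])
    far, far_d = src, 0
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                if dist[v] > far_d:
                    far, far_d = v, dist[v]
                q.append(v)
    return far, far_d

def diameter_bounds(adj, start):
    """Lower bound: distance between the two sweep endpoints.
    Upper bound: twice the first eccentricity (triangle inequality)."""
    u, ecc = bfs_farthest(adj, start)
    _, lower = bfs_farthest(adj, u)     # second sweep, from a far vertex
    return max(ecc, lower), 2 * ecc

# path 0-1-2-3 plus a pendant vertex 4 attached to 1; diameter is 3
adj = {0: [1], 1: [0, 2, 4], 2: [1, 3], 3: [2], 4: [1]}
lo, hi = diameter_bounds(adj, 1)
assert lo <= 3 <= hi
print(lo, hi)
```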
Memory becomes a limiting factor in contemporary applications, such as analyses of the Webgraph and molecular sequences, when many objects need to be counted simultaneously. Robert Morris [Communications of the ACM, 21:840--842, 1978] proposed a probabilistic technique for approximate counting that is extremely space-efficient. The basic idea is to increment a counter containing the value $X$ with probability $2^{-X}$. As a result, the counter contains an approximation of $\lg n$ after $n$ probabilistic updates stored in $\lg\lg n$ bits. Here we revisit the original idea of Morris, and introduce a binary floating-point counter that uses a $d$-bit significand in conjunction with a binary exponent. The counter yields a simple formula for an unbiased estimation of $n$ with a standard deviation of about $0.6\cdot n2^{-d/2}$, and uses $d+\lg\lg n$ bits. We analyze the floating-point counter's performance in a general framework that applies to any probabilistic counter, and derive practical formulas to assess its accuracy.
Approximate counting with a floating-point counter
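The basic idea quoted above is tiny in code. Below is the classic base-2 Morris counter with its unbiased estimator (E[2^X] = n + 1 after n increments, so 2^X - 1 estimates n); the paper's floating-point refinement adds a d-bit significand to control the variance, which this sketch omits:

```python
import random

class MorrisCounter:
    """Classic approximate counter: X fits in about lglg(n) bits.
    Since E[2^X] = n + 1 after n increments, 2^X - 1 is unbiased for n.
    The paper's d-bit-significand refinement is omitted here."""

    def __init__(self):
        self.x = 0

    def increment(self):
        # increment the stored exponent with probability 2^-X
        if random.random() < 2.0 ** -self.x:
            self.x += 1

    def estimate(self):
        return 2 ** self.x - 1

trials, n = 500, 5000
avg = 0.0
for _ in range(trials):
    c = MorrisCounter()
    for _ in range(n):
        c.increment()
    avg += c.estimate() / trials
print(f"true n = {n}, mean estimate = {avg:.0f}")   # close to n on average
```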
We describe a data structure that maintains the number of triangles in a dynamic undirected graph, subject to insertions and deletions of edges and of degree-zero vertices. More generally it can be used to maintain the number of copies of each possible three-vertex subgraph in time O(h) per update, where h is the h-index of the graph: the maximum number h such that the graph contains h vertices of degree at least h. We also show how to maintain the h-index itself, and a collection of h high-degree vertices in the graph, in constant time per update. Our data structure has applications in social network analysis using the exponential random graph model (ERGM); its bound of O(h) time per edge is never worse than the $\Theta(\sqrt m)$ time per edge necessary to list all triangles in a static graph, and is strictly better for graphs obeying a power law degree distribution. In order to better understand the behavior of the h-index statistic and its implications for the performance of our algorithms, we also study the behavior of the h-index on a set of 136 real-world networks.
The h-Index of a Graph and its Application to Dynamic Subgraph Statistics
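The h-index statistic itself is a one-pass computation over the degree sequence; a small sketch of the static version (the paper maintains it dynamically in constant time per update):

```python
def h_index(adj):
    """Largest h such that the graph has h vertices of degree >= h.
    Static computation; the paper maintains this under edge updates."""
    degrees = sorted((len(nbrs) for nbrs in adj.values()), reverse=True)
    h = 0
    while h < len(degrees) and degrees[h] >= h + 1:
        h += 1
    return h

# a triangle 0-1-2 plus a pendant vertex 3 attached to 0
adj = {0: {1, 2, 3}, 1: {0, 2}, 2: {0, 1}, 3: {0}}
assert h_index(adj) == 2     # vertices 0, 1, 2 all have degree >= 2
```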
We study the problem of abstracting a table of data about individuals so that no selection query can identify fewer than k individuals. We show that it is impossible to achieve arbitrarily good polynomial-time approximations for a number of natural variations of the generalization technique, unless P = NP, even when the table has only a single quasi-identifying attribute that represents a geographic or unordered attribute: Zip-codes (nodes of a planar graph generalized into connected subgraphs); GPS coordinates (points in R^2 generalized into non-overlapping rectangles); unordered data (text labels that can be grouped arbitrarily). In addition to impossibility results, we provide approximation algorithms for these difficult single-attribute generalization problems, which, of course, apply to multiple-attribute instances with one attribute that is quasi-identifying. We show theoretically and experimentally that our approximation algorithms can come reasonably close to optimal solutions. Incidentally, the generalization problem for unordered data can be viewed as a novel type of bin packing problem -- min-max bin covering -- which may be of independent interest.
On the Approximability of Geometric and Geographic Generalization and the Min-Max Bin Covering Problem
We provide a smoothed analysis of Hoare's find algorithm and we revisit the smoothed analysis of quicksort. Hoare's find algorithm - often called quickselect - is an easy-to-implement algorithm for finding the k-th smallest element of a sequence. While the worst-case number of comparisons that Hoare's find needs is quadratic, the average-case number is linear. We analyze what happens between these two extremes by providing a smoothed analysis of the algorithm in terms of two different perturbation models: additive noise and partial permutations. Moreover, we provide lower bounds for the smoothed number of comparisons of quicksort and Hoare's find for the median-of-three pivot rule, which usually yields faster algorithms than always selecting the first element: The pivot is the median of the first, middle, and last element of the sequence. We show that median-of-three does not yield a significant improvement over the classic rule: the lower bounds for the classic rule carry over to median-of-three.
On Smoothed Analysis of Quicksort and Hoare's Find
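Hoare's find with the median-of-three pivot rule analyzed above, as a compact sketch (an illustration of the algorithm under study, not the instrumented implementation behind the analysis):

```python
def hoare_find(a, k):
    """Return the k-th smallest element (0-based) of a, using Hoare's
    find with the median-of-three rule: the pivot is the median of the
    first, middle, and last element of the current range."""
    a = list(a)
    lo, hi = 0, len(a) - 1
    while True:
        if lo == hi:
            return a[lo]
        mid = (lo + hi) // 2
        trio = sorted((a[lo], a[mid], a[hi]))
        a[lo], a[mid], a[hi] = trio[1], trio[0], trio[2]
        pivot, i, j = a[lo], lo, hi     # now a[mid] <= pivot <= a[hi]
        while i < j:                    # Hoare-style partition
            while a[j] > pivot:
                j -= 1
            while i < j and a[i] <= pivot:
                i += 1
            if i < j:
                a[i], a[j] = a[j], a[i]
        a[lo], a[j] = a[j], a[lo]       # pivot into its final slot j
        if k == j:
            return a[j]
        elif k < j:
            hi = j - 1
        else:
            lo = j + 1

assert hoare_find([5, 1, 4, 2, 3], 2) == 3   # the median
```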
Many data dissemination and publish-subscribe systems that guarantee the privacy and authenticity of the participants rely on symmetric key cryptography. An important problem in such a system is to maintain the shared group key as the group membership changes. We consider the problem of determining a key hierarchy that minimizes the average communication cost of an update, given update frequencies of the group members and an edge-weighted undirected graph that captures routing costs. We first present a polynomial-time approximation scheme for minimizing the average number of multicast messages needed for an update. We next show that when routing costs are considered, the problem is NP-hard even when the underlying routing network is a tree network or even when every group member has the same update frequency. Our main result is a polynomial time constant-factor approximation algorithm for the general case where the routing network is an arbitrary weighted graph and group members have nonuniform update frequencies.
Approximation Algorithms for Key Management in Secure Multicast
We present an O(n^3 log^2 n)-time algorithm for the following problem: given a finite metric space X, create a star-topology network with the points of X as its leaves, such that the distances in the star are at least as large as in X, with minimum dilation. As part of our algorithm, we solve in the same time bound the parametric negative cycle detection problem: given a directed graph with edge weights that are increasing linear functions of a parameter lambda, find the smallest value of lambda such that the graph contains no negative-weight cycles.
Optimal Embedding Into Star Metrics
We propose new succinct representations of ordinal trees, which have been studied extensively. It is known that any $n$-node static tree can be represented in $2n + o(n)$ bits and a number of operations on the tree can be supported in constant time under the word-RAM model. However the data structures are complicated and difficult to dynamize. We propose a simple and flexible data structure, called the range min-max tree, that reduces the large number of relevant tree operations considered in the literature to a few primitives that are carried out in constant time on sufficiently small trees. The result is extended to trees of arbitrary size, achieving $2n + O(n/\mathrm{polylog}(n))$ bits of space. The redundancy is significantly lower than any previous proposal. Our data structure builds on the range min-max tree to achieve $2n+O(n/\log n)$ bits of space and $O(\log n)$ time for all the operations. We also propose an improved data structure using $2n+O(n\log\log n/\log n)$ bits and improving the time to the optimal $O(\log n/\log \log n)$ for most operations. Furthermore, we support sophisticated operations that allow attaching and detaching whole subtrees, in time $O(\log^{1+\epsilon} n / \log\log n)$. Our techniques are of independent interest. One allows representing dynamic bitmaps and sequences supporting rank/select and indels, within zero-order entropy bounds and optimal time $O(\log n / \log\log n)$ for all operations on bitmaps and polylog-sized alphabets, and $O(\log n \log \sigma / (\log\log n)^2)$ on larger alphabet sizes $\sigma$. This improves upon the best existing bounds for entropy-bounded storage of dynamic sequences, compressed full-text self-indexes, and compressed-space construction of the Burrows-Wheeler transform.
Fully-Functional Static and Dynamic Succinct Trees
In this paper, it is demonstrated that the DNA-based algorithm [Ho et al. 2005] for solving an instance of the clique problem for any graph G = (V, E) with n vertices and p edges and its complementary graph G1 = (V, E1) with n vertices and m = (((n*(n-1))/2)-p) edges can be implemented by Hadamard gates, NOT gates, CNOT gates, CCNOT gates, Grover's operators, and quantum measurements on a quantum computer. It is also demonstrated that if Grover's algorithm is employed to accomplish the readout step in the DNA-based algorithm, the quantum implementation of the DNA-based algorithm is equivalent to the oracle work (in the language of Grover's algorithm), that is, the target state labeling preceding Grover's searching steps. It is shown that one oracle work can be completed with O((2 * n) * (n + 1) * (n + 2) / 3) NOT gates, one CNOT gate and O((4 * m) + (((2 * n) * (n + 1) * (n + 14)) / 6)) CCNOT gates. This is to say that for the quantum implementation of the DNA-based algorithm [Ho et al. 2005] a faster labeling of the target state is attained, which also implies a speedy solution to an instance of the clique problem.
Quantum Algorithms of Bio-molecular Solutions for the Clique Problem on a Quantum Computer
In this work we study the validity of the so-called curse of dimensionality for indexing of databases for similarity search. We perform an asymptotic analysis, with a test model based on a sequence of metric spaces $(\Omega_d)$ from which we pick datasets $X_d$ in an i.i.d. fashion. We call the subscript $d$ the dimension of the space $\Omega_d$ (e.g. for $\mathbb{R}^d$ the dimension is just the usual one) and we allow the size of the dataset $n=n_d$ to be such that $d$ is superlogarithmic but subpolynomial in $n$. We study the asymptotic performance of pivot-based indexing schemes where the number of pivots is $o(n/d)$. We pick the relatively simple cost model of similarity search where we count each distance calculation as a single computation and disregard the rest. We demonstrate that if the spaces $\Omega_d$ exhibit the (fairly common) concentration of measure phenomenon, the performance of similarity search using such indexes is asymptotically linear in $n$. That is, for large enough $d$ the difference between using such an index and performing a search without an index at all is negligible. Thus we confirm the curse of dimensionality in this setting.
Curse of Dimensionality in the Application of Pivot-based Indexes to the Similarity Search Problem
4,259
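A small empirical illustration (not from the paper) of the concentration phenomenon the analysis relies on: as the dimension d grows, distances from a query to i.i.d. data points concentrate sharply around their mean, so a pivot's distance to the query carries less and less pruning information. The Gaussian data model and sample sizes are arbitrary choices for the demo:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2000
for d in (2, 20, 200, 2000):
    X = rng.standard_normal((n, d))   # dataset: n i.i.d. points in R^d
    q = rng.standard_normal(d)        # query point
    dist = np.linalg.norm(X - q, axis=1)
    # relative spread of distances shrinks roughly like 1/sqrt(d)
    print(f"d={d:5d}  mean={dist.mean():8.2f}  rel.std={dist.std()/dist.mean():.4f}")
```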
It is well-known that, given a probability distribution over $n$ characters, in the worst case it takes $\Theta(n \log n)$ bits to store a prefix code with minimum expected codeword length. However, in this paper we first show that, for any $0<\epsilon<1/2$ with $1/\epsilon = O(\mathrm{polylog}(n))$, it takes $O(n \log \log (1 / \epsilon))$ bits to store a prefix code with expected codeword length within $\epsilon$ of the minimum. We then show that, for any constant $c > 1$, it takes $O(n^{1 / c} \log n)$ bits to store a prefix code with expected codeword length at most $c$ times the minimum. In both cases, our data structures allow us to encode and decode any character in $O(1)$ time.
Fast and Compact Prefix Codes
4,260
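For context, a minimal sketch (illustrative only, not the paper's data structure) of the baseline being approximated: an optimal prefix code built Huffman-style with heapq, whose expected codeword length E[len] satisfies the classical bound H <= E[len] < H + 1 for entropy H. The paper's results trade a little extra expected length for far less space to store the code itself:

```python
import heapq, math, itertools

def huffman_lengths(probs):
    """Codeword length of each symbol in an optimal prefix code."""
    counter = itertools.count()              # tie-breaker for equal weights
    heap = [(p, next(counter), [i]) for i, p in enumerate(probs)]
    heapq.heapify(heap)
    depth = [0] * len(probs)
    while len(heap) > 1:
        p1, _, s1 = heapq.heappop(heap)
        p2, _, s2 = heapq.heappop(heap)
        for sym in s1 + s2:                  # every merge adds one bit of depth
            depth[sym] += 1
        heapq.heappush(heap, (p1 + p2, next(counter), s1 + s2))
    return depth

probs = [0.4, 0.2, 0.2, 0.1, 0.05, 0.05]
L = huffman_lengths(probs)
expected = sum(p * l for p, l in zip(probs, L))
H = -sum(p * math.log2(p) for p in probs)
print(f"lengths={L}  E[len]={expected:.3f}  H={H:.3f}")
assert H <= expected < H + 1
```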
In 2005 Li et al. gave a phi-competitive deterministic online algorithm for scheduling of packets with agreeable deadlines, with a very interesting analysis. This is known to be optimal due to a lower bound by Hajek. We claim that the algorithm by Li et al. can be slightly simplified while retaining its competitive ratio. We then introduce randomness into the modified algorithm and argue that the competitive ratio against an oblivious adversary is at most 4/3. Note that this still leaves a gap to the best known lower bound of 5/4 by Chin et al. for randomized algorithms against an oblivious adversary.
A 4/3-competitive randomized algorithm for online scheduling of packets with agreeable deadlines
4,261
Constant-factor, polynomial-time approximation algorithms are presented for two variations of the traveling salesman problem with time windows. In the first variation, the traveling repairman problem, the goal is to find a tour that visits the maximum possible number of locations during their time windows. In the second variation, the speeding deliveryman problem, the goal is to find a tour that uses the minimum possible speedup to visit all locations during their time windows. For both variations, the time windows are of unit length, and the distance metric is based on a weighted, undirected graph. Algorithms with improved approximation ratios are given for the case when the input is defined on a tree rather than a general graph. The algorithms are also extended to handle time windows whose lengths fall in any bounded range.
Approximation Algorithms for the Traveling Repairman and Speeding Deliveryman Problems
4,262
The segment minimization problem consists of finding the smallest set of integer matrices that sum to a given intensity matrix, such that each summand has only one non-zero value, and the non-zeroes in each row are consecutive. This has direct applications in intensity-modulated radiation therapy, an effective form of cancer treatment. We develop three approximation algorithms for matrices with arbitrarily many rows. Our first two algorithms improve the approximation factor from the previous best of $1+\log_2 h$ to (roughly) $3/2 \cdot (1+\log_3 h)$ and $11/6\cdot(1+\log_4{h})$, respectively, where $h$ is the largest entry in the intensity matrix. We illustrate the limitations of the specific approach used to obtain these two algorithms by proving a lower bound of $\frac{(2b-2)}{b}\cdot\log_b{h} + \frac{1}{b}$ on the approximation guarantee. Our third algorithm improves the approximation factor from $2 \cdot (\log D+1)$ to $24/13 \cdot (\log D+1)$, where $D$ is (roughly) the largest difference between consecutive elements of a row of the intensity matrix. Finally, experimentation with these algorithms shows that they perform well with respect to the optimum and outperform other approximation algorithms on 77% of the 122 test cases we consider, which include both real world and synthetic data.
Improved Approximation Algorithms for Segment Minimization in Intensity Modulated Radiation Therapy
4,263
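To make the model concrete, here is a hedged sketch of the classical single-row "sweep" decomposition (a textbook warm-up, not one of the paper's multi-row algorithms): one intensity row is decomposed into unit-height segments with consecutive non-zeroes by repeatedly subtracting 1 from every maximal positive run. The number of segments produced equals the sum of the row's positive increments:

```python
def sweep_decompose(row):
    row = list(row)
    segments = []                       # (start, end) inclusive, height 1 each
    while any(row):
        i = 0
        while i < len(row):
            if row[i] > 0:
                j = i
                while j < len(row) and row[j] > 0:
                    row[j] -= 1         # peel one unit off this maximal run
                    j += 1
                segments.append((i, j - 1))
                i = j
            else:
                i += 1
    return segments

row = [2, 3, 1, 0, 2]
segs = sweep_decompose(row)
check = [0] * len(row)                  # verify the segments sum back to the row
for a, b in segs:
    for k in range(a, b + 1):
        check[k] += 1
assert check == [2, 3, 1, 0, 2]
print(segs)   # [(0, 2), (4, 4), (0, 1), (4, 4), (1, 1)]
```

Allowing segments of height greater than one, and coordinating the choice across rows, is what makes the general problem hard and is where the paper's algorithms operate.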
We consider a somewhat peculiar token/bucket problem which at first sight looks confusing and difficult to solve. The winning approach is to go back to the simple, traditional methods for solving computer science problems, like those taught to us by Knuth. The main trick is to specify clearly what needs to be achieved; then the solution, even if complex, appears almost by itself.
Revisiting Token/Bucket Algorithms in New Applications
4,264
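Since the abstract does not spell out its variant, here is the textbook token-bucket shaper as a reference point (a generic sketch, not necessarily the paper's formulation): tokens accrue at `rate` up to `capacity`, and a packet of a given size conforms iff enough tokens are available when it arrives:

```python
class TokenBucket:
    def __init__(self, rate, capacity):
        self.rate = rate             # tokens added per second
        self.capacity = capacity     # bucket depth
        self.tokens = capacity
        self.last = 0.0              # time of last update

    def allow(self, t, size):
        """Return True iff a packet of `size` tokens conforms at time `t`."""
        self.tokens = min(self.capacity, self.tokens + (t - self.last) * self.rate)
        self.last = t
        if self.tokens >= size:
            self.tokens -= size
            return True
        return False

tb = TokenBucket(rate=100.0, capacity=200.0)
for t, size in [(0.0, 150), (0.1, 150), (1.0, 150), (1.2, 150)]:
    print(t, size, tb.allow(t, size))   # True, False, True, False
```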
We offer a theoretical validation of the curse of dimensionality in the pivot-based indexing of datasets for similarity search, by proving, in the framework of statistical learning, that in high dimensions no pivot-based indexing scheme can essentially outperform the linear scan. A study of the asymptotic performance of pivot-based indexing schemes is performed on a sequence of datasets modeled as samples $X_d$ picked in i.i.d. fashion from metric spaces $\Omega_d$. We allow the size of the dataset $n=n_d$ to be such that $d$, the ``dimension'', is superlogarithmic but subpolynomial in $n$. The number of pivots is allowed to grow as $o(n/d)$. We pick the least restrictive cost model of similarity search where we count each distance calculation as a single computation and disregard the rest. We demonstrate that if the intrinsic dimension of the spaces $\Omega_d$ in the sense of concentration of measure phenomenon is $O(d)$, then the performance of similarity search pivot-based indexes is asymptotically linear in $n$.
Curse of Dimensionality in Pivot-based Indexes
4,265
The Multidimensional Assignment Problem (MAP or s-AP in the case of s dimensions) is an extension of the well-known assignment problem. The most studied case of MAP is 3-AP, though the problems with larger values of s also have a number of applications. In this paper we propose a memetic algorithm for MAP that is a combination of a genetic algorithm with a local search procedure. The main contribution of the paper is the idea of a dynamically adjusted generation size, which gives the algorithm outstanding flexibility to perform well for both small and large fixed running times. The results of computational experiments for several instance families show that the proposed algorithm produces solutions of very high quality in a reasonable time and outperforms the state-of-the-art 3-AP memetic algorithm.
A Memetic Algorithm for the Multidimensional Assignment Problem
4,266
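A minimal sketch of the local-search half of such a memetic scheme for 3-AP (illustrative only; the paper's genetic operators and the dynamically adjusted generation size are not reproduced). A solution is a pair of permutations (p, q), the cost is the sum of c[i][p[i]][q[i]], and we apply improving 2-swaps on q until none remains; the random cost tensor is made up for the demo:

```python
import random

def cost(c, p, q):
    return sum(c[i][p[i]][q[i]] for i in range(len(p)))

def two_swap_local_search(c, p, q):
    n, improved = len(p), True
    while improved:
        improved = False
        for a in range(n):
            for b in range(a + 1, n):
                old = c[a][p[a]][q[a]] + c[b][p[b]][q[b]]
                new = c[a][p[a]][q[b]] + c[b][p[b]][q[a]]
                if new < old:               # swap the third-dimension entries
                    q[a], q[b] = q[b], q[a]
                    improved = True
    return q

random.seed(1)
n = 6
c = [[[random.randint(1, 9) for _ in range(n)] for _ in range(n)] for _ in range(n)]
p, q = list(range(n)), list(range(n))
print("before:", cost(c, p, q))
two_swap_local_search(c, p, q)
print("after: ", cost(c, p, q))
```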
In this paper we consider several constrained activity scheduling problems in the time and space domains, like finding activity orderings which optimize the values of several objective functions (time scheduling) or finding optimal locations where certain types of activities will take place (space scheduling). We present novel, efficient algorithmic solutions for all the considered problems, based on dynamic programming and greedy techniques. In each case we compute exact, optimal solutions.
Efficient Algorithms for Several Constrained Activity Scheduling Problems in the Time and Space Domains
4,267
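As one concrete, classical instance of the greedy technique the abstract mentions (not necessarily one of the paper's own variants), the earliest-finish-time rule selects a maximum set of pairwise non-overlapping activities:

```python
def select_activities(intervals):
    chosen, last_end = [], float('-inf')
    for start, end in sorted(intervals, key=lambda iv: iv[1]):  # by finish time
        if start >= last_end:            # compatible with everything chosen
            chosen.append((start, end))
            last_end = end
    return chosen

acts = [(1, 4), (3, 5), (0, 6), (5, 7), (3, 9), (5, 9), (6, 10), (8, 11)]
print(select_activities(acts))   # [(1, 4), (5, 7), (8, 11)]
```

The exchange argument showing optimality (any solution can be transformed to end its first activity no later than the greedy one) is the template for correctness proofs of this family of greedy schedulers.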
In classical scheduling problems, we are given jobs and machines, and have to schedule all the jobs to minimize some objective function. What if each job has a specified profit, and we are no longer required to process all jobs -- we can schedule any subset of jobs whose total profit is at least a (hard) target profit requirement, while still approximately minimizing the objective function? We refer to this class of problems as scheduling with outliers. This model was initiated by Charikar and Khuller (SODA'06) on the minimum max-response time in broadcast scheduling. We consider three other well-studied scheduling objectives: the generalized assignment problem, average weighted completion time, and average flow time, and provide LP-based approximation algorithms for them. For the minimum average flow time problem on identical machines, we give a logarithmic approximation algorithm for the case of unit profits based on rounding an LP relaxation; we also show a matching integrality gap. For the average weighted completion time problem on unrelated machines, we give a constant factor approximation. The algorithm is based on randomized rounding of the time-indexed LP relaxation strengthened by the knapsack-cover inequalities. For the generalized assignment problem with outliers, we give a simple reduction to GAP without outliers to obtain an algorithm whose makespan is within 3 times the optimum makespan, and whose cost is at most (1 + \epsilon) times the optimal cost.
Scheduling with Outliers
4,268
We consider online algorithms for pull-based broadcast scheduling. In this setting there are n pages of information at a server and requests for pages arrive online. When the server serves (broadcasts) a page p, all outstanding requests for that page are satisfied. We study two related metrics, namely maximum response time (waiting time) and maximum delay-factor, and their weighted versions. We obtain the following results in the worst-case online competitive model. - We show that FIFO (first-in first-out) is 2-competitive even when the page sizes are different. Previously this was known only for unit-sized pages [10] via a delicate argument. Our proof differs from [10] and is perhaps more intuitive. - We give an online algorithm for maximum delay-factor that is O(1/\epsilon^2)-competitive with (1+\epsilon)-speed for unit-sized pages and with (2+\epsilon)-speed for different sized pages. This improves on the algorithm in [12] which required (2+\epsilon)-speed and (4+\epsilon)-speed respectively. In addition we show that the algorithm and analysis can be extended to obtain the same results for maximum weighted response time and delay factor. - We show that a natural greedy algorithm modeled after LWF (Longest-Wait-First) is not O(1)-competitive for maximum delay factor with any constant speed even in the setting of standard scheduling with unit-sized jobs. This complements our upper bound and demonstrates the importance of the tradeoff made in our algorithm.
Minimizing Maximum Response Time and Delay Factor in Broadcast Scheduling
4,269
We consider online algorithms for broadcast scheduling. In the pull-based broadcast model there are $n$ unit-sized pages of information at a server and requests arrive online for pages. When the server transmits a page $p$, all outstanding requests for that page are satisfied. The longest-wait-first (LWF) algorithm is a natural algorithm that has been shown to have good empirical performance. In this paper we make two main contributions to the analysis of LWF and broadcast scheduling. First, we give an intuitive and easy to understand analysis of LWF which shows that it is $O(1/\epsilon^2)$-competitive for average flow-time with $(4+\epsilon)$ speed; using a more involved analysis, we show that LWF is $O(1/\epsilon^3)$-competitive for average flow-time with $(3.4+\epsilon)$ speed. Second, we show that a natural extension of LWF is $O(1)$-speed $O(1)$-competitive for more general objective functions such as average delay-factor and $L_k$ norms of delay-factor (for fixed $k$).
Longest Wait First for Broadcast Scheduling
4,270
Logconcave functions represent the current frontier of efficient algorithms for sampling, optimization and integration in R^n. Efficient sampling algorithms to sample according to a probability density (to which the other two problems can be reduced) rely on good isoperimetry, which is known to hold for arbitrary logconcave densities. In this paper, we extend this frontier in two ways: first, we characterize convexity-like conditions that imply good isoperimetry, i.e., what condition on function values along every line guarantees good isoperimetry? The answer turns out to be the set of (1/(n-1))-harmonic concave functions in R^n; we also prove that this is the best possible characterization, along every line, of functions having good isoperimetry. Next, we give the first efficient algorithm for sampling according to such functions, with complexity depending on a smoothness parameter. Further, noting that the multivariate Cauchy density is an important distribution in this class, we exploit certain properties of the Cauchy density to give an efficient sampling algorithm based on random walks, with a mixing time that matches the current best bounds known for sampling logconcave functions.
The Limit of Convexity Based Isoperimetry: Sampling Harmonic-Concave Functions
4,271
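A hedged sketch of the sampling target (a plain Metropolis random walk, not the paper's walk with its proven mixing bounds): the multivariate Cauchy density f(x) proportional to (1 + |x|^2)^(-(n+1)/2) is harmonic-concave but not logconcave, which is exactly why it falls outside the older frontier. Step size and proposal are arbitrary demo choices:

```python
import numpy as np

def cauchy_metropolis(n, steps, step_size=0.5, seed=0):
    rng = np.random.default_rng(seed)
    logf = lambda x: -(n + 1) / 2 * np.log1p(x @ x)   # log-density up to a constant
    x = np.zeros(n)
    samples = []
    for _ in range(steps):
        y = x + step_size * rng.standard_normal(n)    # symmetric Gaussian proposal
        if np.log(rng.random()) < logf(y) - logf(x):  # Metropolis accept/reject
            x = y
        samples.append(x.copy())
    return np.array(samples)

S = cauchy_metropolis(n=3, steps=20000)
r = np.linalg.norm(S, axis=1)
# heavy tails: the radius has no finite mean in theory, so report the median
print("median radius:", np.median(r))
```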
We consider a stochastic perturbation of a FitzHugh-Nagumo system. We show that it is possible to generate oscillations for values of parameters which do not allow oscillations for the deterministic system. We also study the appearance of a new equilibrium point and new bifurcation parameters due to the noisy component.
Oscillations and Random Perturbations of a FitzHugh-Nagumo System
4,272
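A minimal simulation sketch of a stochastically perturbed FitzHugh-Nagumo system via Euler-Maruyama. The equations and parameter values below are a standard textbook form, not necessarily the exact system of the paper; the point they illustrate matches the abstract: with sigma = 0 this parameter set is expected to sit near a stable equilibrium, while the noise term can repeatedly kick the trajectory over threshold and produce spikes:

```python
import numpy as np

def fhn_em(T=200.0, dt=0.01, a=0.7, b=0.8, eps=0.08, I=0.3, sigma=0.2, seed=0):
    rng = np.random.default_rng(seed)
    steps = int(T / dt)
    v, w = np.empty(steps), np.empty(steps)
    v[0], w[0] = -1.0, -0.5
    for k in range(steps - 1):
        dW = np.sqrt(dt) * rng.standard_normal()     # Brownian increment
        v[k+1] = v[k] + (v[k] - v[k]**3 / 3 - w[k] + I) * dt + sigma * dW
        w[k+1] = w[k] + eps * (v[k] + a - b * w[k]) * dt
    return v, w

v, w = fhn_em()
print("upward crossings of v = 1.0:", int(np.sum((v[:-1] < 1.0) & (v[1:] >= 1.0))))
```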
The multidimensional assignment problem (MAP) (abbreviated s-AP in the case of s dimensions) is an extension of the well-known assignment problem. The most studied case of MAP is 3-AP, though the problems with larger values of s also have a number of applications. In this paper we consider four fast construction heuristics for MAP. One of the heuristics is new. A modification of the heuristics is proposed to optimize the access to slow computer memory. The results of computational experiments for several instance families are provided and discussed.
Empirical evaluation of construction heuristics for the multidimensional assignment problem
4,273
In the Scheduling Machines with Capacity Constraints problem, we are given k identical machines, where machine i can process at most m_i jobs, and M jobs, where job j has a non-negative processing time t_j >= 0. The task is to find a schedule that minimizes the makespan while meeting the capacity constraints. In this paper, we present a 3-approximation algorithm using an extension of the Iterative Rounding Method introduced by Jain. To the best of the authors' knowledge, this is the first attempt to apply the Iterative Rounding Method to a scheduling problem with capacity constraints.
Approximating Scheduling Machines with Capacity Constraints
4,274
We consider two well-known natural variants of bin packing, and show that these packing problems admit asymptotic fully polynomial time approximation schemes (AFPTAS). In bin packing problems, a set of one-dimensional items of size at most 1 is to be assigned (packed) to subsets of sum at most 1 (bins). It has been known for a while that the most basic problem admits an AFPTAS. In this paper, we develop methods that allow us to extend this result to other variants of bin packing. Specifically, the problems which we study in this paper, and for which we design asymptotic fully polynomial time approximation schemes, are the following. The first problem is "Bin packing with cardinality constraints", where a parameter k is given, such that a bin may contain up to k items; the goal is to minimize the number of bins used. The second problem is "Bin packing with rejection", where every item has a rejection penalty associated with it; an item needs to be either packed in a bin or rejected, and the goal is to minimize the number of used bins plus the total rejection penalty of unpacked items. This resolves the complexity of two important variants of the bin packing problem. Our approximation schemes use a novel method for packing the small items. This new method is the core of the improved running times of our schemes over the running times of the previous results, which are only asymptotic polynomial time approximation schemes (APTAS).
AFPTAS results for common variants of bin packing: A new method to handle the small items
4,275
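A simple baseline for the first variant (not the AFPTAS, which is far more involved): first-fit for bin packing with a cardinality constraint, where each bin holds items of total size at most 1 and at most k items. This just makes the model concrete:

```python
def first_fit_cardinality(items, k):
    bins = []                      # each bin: [load, item count]
    assignment = []
    for size in items:
        for idx, b in enumerate(bins):
            if b[0] + size <= 1.0 and b[1] < k:   # fits and has a free slot
                b[0] += size; b[1] += 1
                assignment.append(idx)
                break
        else:                      # no open bin fits: open a new one
            bins.append([size, 1])
            assignment.append(len(bins) - 1)
    return len(bins), assignment

items = [0.5, 0.7, 0.1, 0.4, 0.2, 0.2, 0.1, 0.6]
print(first_fit_cardinality(items, k=2))
```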
Following the work of Anily et al., we consider a variant of bin packing, called "bin packing with general cost structures" (GCBP) and design an asymptotic fully polynomial time approximation scheme (AFPTAS) for this problem. In the classic bin packing problem, a set of one-dimensional items is to be assigned to subsets of total size at most 1, that is, to be packed into unit sized bins. However, in GCBP, the cost of a bin is not 1 as in classic bin packing, but it is a non-decreasing and concave function of the number of items packed in it, where the cost of an empty bin is zero. The construction of the AFPTAS requires novel techniques for dealing with small items, which are developed in this work. In addition, we develop a fast approximation algorithm which acts identically for all non-decreasing and concave functions, and has an asymptotic approximation ratio of 1.5 for all functions simultaneously.
Bin packing with general cost structures
4,276
We provide geometrical interpretation of the Master Theorem to solve divide-and-conquer recurrences. We show how different cases of the recurrences correspond to different kinds of fractal images. Fractal dimension and Hausdorff measure are shown to be closely related to the solution of such recurrences.
Geometrical Interpretation of the Master Theorem for Divide-and-conquer Recurrences
4,277
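Since the abstract presupposes the statement, here is the recurrence and its three cases in the standard textbook form (as in CLRS), which are the regimes the paper maps to fractal images; the regularity condition in the third case is the usual one:

\[
T(n) = a\,T\!\left(\frac{n}{b}\right) + f(n), \quad a \ge 1,\ b > 1, \qquad
T(n) = \begin{cases}
\Theta\!\left(n^{\log_b a}\right) & \text{if } f(n) = O\!\left(n^{\log_b a - \varepsilon}\right) \text{ for some } \varepsilon > 0,\\
\Theta\!\left(n^{\log_b a} \log n\right) & \text{if } f(n) = \Theta\!\left(n^{\log_b a}\right),\\
\Theta\!\left(f(n)\right) & \text{if } f(n) = \Omega\!\left(n^{\log_b a + \varepsilon}\right) \text{ and } a\,f(n/b) \le c\,f(n) \text{ for some } c < 1.
\end{cases}
\]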
This paper presents a new algorithm integrating Genetic Algorithms and Tabu Search to solve the Job Shop Scheduling problem. The idea of the proposed algorithm is derived from Genetic Algorithms. Most scheduling problems require either exponential time or space to generate an optimal answer; Job Shop Scheduling (JSS) is the general scheduling problem, and since it is NP-complete, optimal solutions are difficult to find. This paper applies Genetic Algorithms and Tabu Search to the Job Shop Scheduling problem and compares the results obtained by each. With our approach, the JSS problem reaches an optimal solution and the makespan is minimized.
Integrating Genetic Algorithm, Tabu Search Approach for Job Shop Scheduling
4,278
We study the maximum weight matching problem in the semi-streaming model, and improve on the currently best one-pass algorithm due to Zelke (Proc. of STACS2008, pages 669-680) by devising a deterministic approach whose performance guarantee is 4.91+epsilon. In addition, we study preemptive online algorithms, a sub-class of one-pass algorithms where we are only allowed to maintain a feasible matching in memory at any point in time. All known results prior to Zelke's belong to this sub-class. We provide a lower bound of 4.967 on the competitive ratio of any such deterministic algorithm, and hence show that future improvements will have to store in memory a set of edges which is not necessarily a feasible matching.
Improved approximation guarantees for weighted matching in the semi-streaming model
4,279
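A sketch of the preemptive one-pass sub-class discussed in the abstract, in the style of the known "admit only if much heavier" algorithms (illustrative; this is neither Zelke's algorithm nor the paper's improvement): keep a feasible matching in memory, and admit a new edge only if its weight beats its conflicting matched edges by a factor 1 + gamma, evicting them:

```python
def streaming_matching(edges, gamma=0.5):
    matched = {}                     # vertex -> the (u, v, w) edge covering it
    for u, v, w in edges:
        conflicts = {id(e): e for e in (matched.get(u), matched.get(v)) if e}
        if w > (1 + gamma) * sum(e[2] for e in conflicts.values()):
            for e in conflicts.values():      # evict conflicting edges
                del matched[e[0]]; del matched[e[1]]
            matched[u] = matched[v] = (u, v, w)
    # each kept edge is stored under both endpoints; deduplicate by identity
    return {id(e): e for e in matched.values()}.values()

stream = [(1, 2, 3.0), (2, 3, 5.0), (4, 5, 1.0), (3, 4, 9.0), (1, 2, 4.0)]
print(sorted(streaming_matching(stream)))
```

The lower bound in the abstract says precisely that no algorithm of this restricted shape, however the admission rule is tuned, can beat ratio 4.967.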
We present a randomized parallel algorithm that, with probability 1-o(1), computes the greatest common divisor of two integers of n bits in length in O(n loglog n / log n) expected time using n^{6+\epsilon} processors on the EREW PRAM parallel model of computation. We believe this to be the first randomized sublinear time algorithm on the EREW PRAM for this problem.
A Randomized Sublinear Time Parallel GCD Algorithm for the EREW PRAM
4,280
We study integrality gaps and approximability of two closely related problems on directed graphs. Given a set V of n nodes in an underlying asymmetric metric and two specified nodes s and t, both problems ask to find an s-t path visiting all other nodes. In the asymmetric traveling salesman path problem (ATSPP), the objective is to minimize the total cost of this path. In the directed latency problem, the objective is to minimize the sum of distances on this path from s to each node. Both of these problems are NP-hard. The best known approximation algorithms for ATSPP had ratio O(log n) until the very recent result that improves it to O(log n/ log log n). However, only a bound of O(sqrt(n)) for the integrality gap of its linear programming relaxation has been known. For directed latency, the best previously known approximation algorithm has a guarantee of O(n^(1/2+eps)), for any constant eps > 0. We present a new algorithm for the ATSPP problem that has an approximation ratio of O(log n), but whose analysis also bounds the integrality gap of the standard LP relaxation of ATSPP by the same factor. This solves an open problem posed by Chekuri and Pal [2007]. We then pursue a deeper study of this linear program and its variations, which leads to an algorithm for the k-person ATSPP (where k s-t paths of minimum total length are sought) and an O(log n)-approximation for the directed latency problem.
Asymmetric Traveling Salesman Path and Directed Latency Problems
4,281
Although many authors have considered how many ternary comparisons it takes to sort a multiset $S$ of size $n$, the best known upper and lower bounds still differ by a term linear in $n$. In this paper we restrict our attention to online stable sorting and prove upper and lower bounds that are within $o(n)$ not only of each other but also of the best known upper bound for offline sorting. Specifically, we first prove that if the number of distinct elements $\sigma = o(n / \log n)$, then $(H + 1) n + o(n)$ comparisons are sufficient, where $H$ is the entropy of the distribution of the elements in $S$. We then give a simple proof that $(H + 1) n - o(n)$ comparisons are necessary in the worst case.
Tight Bounds for Online Stable Sorting
4,282
In this paper, we present a framework based on a simple data structure and parameterized algorithms for the problems of finding items in an unsorted list of linearly ordered items based on their rank (selection) or value (search). As a side-effect of answering these online selection and search queries, we progressively sort the list. Our algorithms are based on Hoare's Quickselect, and are parameterized based on the pivot selection method. For example, if we choose the pivot as the last item in a subinterval, our framework yields algorithms that will answer $q \leq n$ unique selection and/or search queries in a total of O(n log q) average time. After $q = \Omega(n)$ queries the list is sorted. Each repeated selection query takes constant time, and each repeated search query takes O(log n) time. The two query types can be interleaved freely. By plugging different pivot selection methods into our framework, these results can, for example, become randomized expected time or deterministic worst-case time. Our methods are easy to implement, and we show they perform well in practice.
Online Sorting via Searching and Selection
4,283
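A minimal sketch of the framework's core idea (the pivot rule and bookkeeping are simplified relative to the paper): answer select(k) with Hoare-style quickselect, but remember every pivot's final position; a later query only touches the stretch between the nearest remembered pivots, so the list is progressively sorted as queries arrive. The random pivot rule below is just one instantiation:

```python
import bisect, random

class OnlineSorter:
    def __init__(self, a):
        self.a = a
        self.fixed = [-1, len(a)]          # positions already in final sorted place

    def _partition(self, lo, hi):
        p = random.randrange(lo, hi + 1)   # random pivot (one possible rule)
        self.a[p], self.a[hi] = self.a[hi], self.a[p]
        store = lo
        for i in range(lo, hi):
            if self.a[i] < self.a[hi]:
                self.a[i], self.a[store] = self.a[store], self.a[i]
                store += 1
        self.a[store], self.a[hi] = self.a[hi], self.a[store]
        bisect.insort(self.fixed, store)   # position `store` is now final
        return store

    def select(self, k):
        """Value of rank k (0-based), narrowing within remembered pivots."""
        i = bisect.bisect_left(self.fixed, k)
        if self.fixed[i] == k:             # rank k already in final position
            return self.a[k]
        lo, hi = self.fixed[i - 1] + 1, self.fixed[i] - 1
        while lo < hi:
            m = self._partition(lo, hi)
            if m == k:
                return self.a[k]
            lo, hi = (m + 1, hi) if m < k else (lo, m - 1)
        return self.a[k]

random.seed(7)
s = OnlineSorter([5, 3, 8, 1, 9, 2, 7, 4, 6, 0])
print([s.select(k) for k in (4, 1, 8)])   # [4, 1, 8]
```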
The {\sc $c$-Balanced Separator} problem is a graph-partitioning problem in which, given a graph $G$, one aims to find a cut of minimum size such that both sides of the cut have at least $cn$ vertices. In this paper, we present new directions of progress on the {\sc $c$-Balanced Separator} problem. More specifically, we propose a new family of mathematical programs, which depends upon a parameter $\epsilon > 0$, and extend the seminal work of Arora-Rao-Vazirani ({\sf ARV}) \cite{ARV} to show that the polynomial time solvability of the proposed family of programs would improve the approximation factor to $O(\log^{{1/3} + \epsilon} n)$ from the best-known factor of $O(\sqrt{\log n})$ due to {\sf ARV}. In fact, for $\epsilon = 1/3$, the program we get is the SDP proposed by {\sf ARV}. For $\epsilon < 1/3$, this family of programs is not convex, but one can transform them into so-called \emph{\textbf{concave programs}}, in which one optimizes a concave function over a convex feasible set. The properties of concave programs allow one to apply techniques due to Hoffman \cite{H81} or Tuy \emph{et al.} \cite{TTT85} to solve such problems with arbitrary accuracy. But the problem of finding a method that solves these programs in polynomial time remains open. Our result, although conditional, introduces a new family of programs which is more powerful than semi-definite programming in the context of approximation algorithms, and hence it will be of interest to investigate this family both in the direction of designing efficient algorithms and proving hardness results.
Towards an $O(\sqrt[3]{\log n})$-Approximation Algorithm for {\sc Balanced Separator}
4,284
The Consensus Clustering problem has been introduced as an effective way to analyze the results of different microarray experiments. The problem consists of looking for a partition that best summarizes a set of input partitions (each corresponding to a different microarray experiment) under a simple and intuitive cost function. The problem admits polynomial time algorithms on two input partitions, but is APX-hard on three input partitions. We investigate the restriction of Consensus Clustering when the output partition is required to contain at most k sets, giving a polynomial time approximation scheme (PTAS) while proving the NP-hardness of this restriction.
A PTAS for the Minimum Consensus Clustering Problem with a Fixed Number of Clusters
4,285
We give a new analysis of the RMix algorithm by Chin et al. for the Buffer Management with Bounded Delay problem (or online scheduling of unit jobs to maximise weighted throughput). Unlike the original proof of e/(e-1)-competitiveness, the new one holds even in the adaptive-online adversary model. In fact, the proof also works for a slightly more general problem studied by Bie{\'n}kowski et al.
Randomised Buffer Management with Bounded Delay against Adaptive Adversary
4,286
The working-set bound [Sleator and Tarjan, J. ACM, 1985] roughly states that searching for an element is fast if the element was accessed recently. Binary search trees, such as splay trees, can achieve this property in the amortized sense, while data structures that are not binary search trees are known to have this property in the worst case. We close this gap and present a binary search tree called a layered working-set tree that guarantees the working-set property in the worst case. The unified bound [Badoiu et al., TCS, 2007] roughly states that searching for an element is fast if it is near (in terms of rank distance) to a recently accessed element. We show how layered working-set trees can be used to achieve the unified bound to within a small additive term in the amortized sense while maintaining in the worst case an access time that is both logarithmic and within a small multiplicative factor of the working-set bound.
Layered Working-Set Trees
4,287
We study the problem of buffer management in QoS-enabled network switches in the bounded delay model where each packet is associated with a weight and a deadline. We consider the more realistic situation where the network switch has a finite buffer size. A 9.82-competitive algorithm is known for the case of multiple buffers (Azar and Levy, SWAT'06). Recently, for the case of a single buffer, a 3-competitive deterministic algorithm and a 2.618-competitive randomized algorithm was known (Li, INFOCOM'09). In this paper we give a simple deterministic 2-competitive algorithm for the case of a single buffer.
Bounded Delay Packet Scheduling in a Bounded Buffer
4,288
Given strings $P$ and $Q$, the (exact) string matching problem is to find all positions of substrings in $Q$ matching $P$. The classical Knuth-Morris-Pratt algorithm [SIAM J. Comput., 1977] solves the string matching problem in linear time, which is optimal if we can only read one character at a time. However, most strings are stored in a computer in a packed representation with several characters in a single word, giving us the opportunity to read multiple characters simultaneously. In this paper we study the worst-case complexity of string matching on strings given in packed representation. Let $m \leq n$ be the lengths of $P$ and $Q$, respectively, and let $\sigma$ denote the size of the alphabet. On a standard unit-cost word-RAM with logarithmic word size we present an algorithm using time $$ O\left(\frac{n}{\log_\sigma n} + m + \mathrm{occ}\right). $$ Here $\mathrm{occ}$ is the number of occurrences of $P$ in $Q$. For $m = o(n)$ this improves the $O(n)$ bound of the Knuth-Morris-Pratt algorithm. Furthermore, if $m = O(n/\log_\sigma n)$ our algorithm is optimal, since any algorithm must spend at least $\Omega(\frac{(n+m)\log \sigma}{\log n} + \mathrm{occ}) = \Omega(\frac{n}{\log_\sigma n} + \mathrm{occ})$ time to read the input and report all occurrences. The result is obtained by a novel automaton construction based on the Knuth-Morris-Pratt algorithm combined with a new compact representation of subautomata allowing an optimal tabulation-based simulation.
Fast Searching in Packed Strings
4,289
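For reference, the classical character-at-a-time Knuth-Morris-Pratt matcher that the paper builds on and improves (the paper's contribution, roughly, is to process a whole word of packed characters per step instead of one):

```python
def kmp_search(P, Q):
    # failure function: fail[i] = length of longest proper border of P[:i+1]
    fail = [0] * len(P)
    k = 0
    for i in range(1, len(P)):
        while k > 0 and P[i] != P[k]:
            k = fail[k - 1]
        if P[i] == P[k]:
            k += 1
        fail[i] = k
    # scan the text, never moving backwards in Q
    occ, k = [], 0
    for j, c in enumerate(Q):
        while k > 0 and c != P[k]:
            k = fail[k - 1]
        if c == P[k]:
            k += 1
        if k == len(P):
            occ.append(j - len(P) + 1)     # occurrence start position
            k = fail[k - 1]
    return occ

print(kmp_search("aba", "ababababa"))   # [0, 2, 4, 6]
```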
Today's ride sharing services still mimic little more than a billboard: they list the offers and allow searching by source and target city, sometimes enriched with radial search. So finding a connection between big cities is quite easy, since these places are on a list of designated origin and destination points. But when you want to go from a small town to another small town, even one next to a freeway, you run into problems: you can't find offers that would or could pass by the town with little or no detour. We solve this interesting problem by presenting a fast algorithm that computes the offers with the smallest detours w.r.t. a request. Our experiments show that the problem is efficiently solvable in times suitable for a web service implementation. For realistic database sizes we achieve lookup times of about 5 ms and a matching rate of 90% instead of just 70% for the simple matching algorithms used today.
Fast Detour Computation for Ride Sharing
4,290
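A brute-force sketch of the detour criterion itself (the paper's point is precisely to avoid this linear scan via preprocessing; the toy graph and offers below are made up): the detour of an offer s->t with respect to a request a->b is d(s,a) + d(a,b) + d(b,t) - d(s,t), the extra driving needed to pick up and drop off:

```python
import heapq

def dijkstra(adj, src):
    dist = {src: 0.0}
    pq = [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float('inf')):
            continue
        for v, w in adj[u]:
            if d + w < dist.get(v, float('inf')):
                dist[v] = d + w
                heapq.heappush(pq, (d + w, v))
    return dist

adj = {                                    # tiny undirected road network
    'A': [('B', 2), ('C', 5)], 'B': [('A', 2), ('C', 2), ('D', 4)],
    'C': [('A', 5), ('B', 2), ('D', 1)], 'D': [('B', 4), ('C', 1)],
}
dist = {u: dijkstra(adj, u) for u in adj}  # all-pairs; fine for a toy graph

def detour(offer, request):
    (s, t), (a, b) = offer, request
    return dist[s][a] + dist[a][b] + dist[b][t] - dist[s][t]

offers = [('A', 'D'), ('B', 'C')]
request = ('B', 'D')
print(min(offers, key=lambda o: detour(o, request)))   # ('A', 'D'), detour 0
```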
The input to the unrooted traveling repairman problem is an undirected metric graph and a subset of nodes, each of which has a time window of unit length. Given that a repairman can start at any location, the goal is to plan a route that visits as many nodes as possible during their respective time windows. A polynomial-time bicriteria approximation algorithm is presented for this problem, gaining an increased fraction of repairman visits for increased speedup of repairman motion. For speedup $s$, we find a $6\gamma/(s + 1)$-approximation for $s$ in the range $1 \leq s \leq 2$ and a $4\gamma/s$-approximation for $s$ in the range $2 \leq s \leq 4$, where $\gamma = 1$ on tree-shaped networks and $\gamma = 2 + \epsilon$ on general metric graphs.
Speedup in the Traveling Repairman Problem with Unit Time Windows
4,291
Let G be a graph that may be drawn in the plane in such a way that all internal faces are centrally symmetric convex polygons. We show how to find a drawing of this type that maximizes the angular resolution of the drawing, the minimum angle between any two incident edges, in polynomial time, by reducing the problem to one of finding parametric shortest paths in an auxiliary graph. The running time is at most O(t^3), where t is a parameter of the input graph that is at most O(n) but is more typically proportional to sqrt(n).
Optimal Angular Resolution for Face-Symmetric Drawings
4,292
The sort transform (ST) is a modification of the Burrows-Wheeler transform (BWT). Both transformations map an arbitrary word of length n to a pair consisting of a word of length n and an index between 1 and n. The BWT sorts all rotation conjugates of the input word, whereas the ST of order k only uses the first k letters for sorting all such conjugates. If two conjugates start with the same prefix of length k, then the indices of the rotations are used for tie-breaking. Both transforms output the sequence of the last letters of the sorted list and the index of the input within the sorted list. In this paper, we discuss a bijective variant of the BWT (due to Scott), proving its correctness and relations to other results due to Gessel and Reutenauer (1993) and Crochemore, Desarmenien, and Perrin (2005). Further, we present a novel bijective variant of the ST.
On Bijective Variants of the Burrows-Wheeler Transform
4,293
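For reference, the two transforms as defined in the abstract, in their naive rotation-sorting form (the bijective variants the paper studies refine exactly this construction; real implementations use suffix sorting instead):

```python
def bwt(w):
    """Burrows-Wheeler transform: last column of sorted rotations, 1-based index."""
    rots = sorted(w[i:] + w[:i] for i in range(len(w)))
    return ''.join(r[-1] for r in rots), rots.index(w) + 1

def st(w, k):
    """Sort transform of order k: sort rotations by their length-k prefix,
    ties broken by rotation index (Python's stable sort does this for us)."""
    rots = sorted(range(len(w)), key=lambda i: (w[i:] + w[:i])[:k])
    last = ''.join((w[i:] + w[:i])[-1] for i in rots)
    return last, rots.index(0) + 1

print(bwt("banana"))      # ('nnbaaa', 4)
print(st("banana", 2))    # ('nbnaaa', 4)
```

Note where the two differ: the BWT compares full rotations, the order-2 ST stops after two letters, so the rotations starting "an..." and "na..." keep their original relative order.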
We study a number of multi-route cut problems: given a graph G=(V,E) and connectivity thresholds k_(u,v) on pairs of nodes, the goal is to find a minimum cost set of edges or vertices the removal of which reduces the connectivity between every pair (u,v) to strictly below its given threshold. These problems arise in the context of reliability in communication networks; they are natural generalizations of traditional minimum cut problems where the thresholds are either 1 (we want to completely separate the pair) or infinity (we don't care about the connectivity for the pair). We provide the first non-trivial approximations to a number of variants of the problem, including for both node-disjoint and edge-disjoint connectivity thresholds. A main contribution of our work is an extension of the region growing technique for approximating minimum multicuts to the multi-route setting. When the connectivity thresholds are either 2 or infinity (the "2-route cut" case), we obtain polylogarithmic approximations while satisfying the thresholds exactly. For arbitrary connectivity thresholds this approach leads to bicriteria approximations where we approximately satisfy the thresholds and approximately minimize the cost. We present a number of different algorithms achieving different cost-connectivity tradeoffs.
Region growing for multi-route cuts
4,294
The Lovasz Local Lemma (LLL) is a powerful result in probability theory that states that the probability that none of a set of bad events happens is nonzero if the probability of each event is small compared to the number of events that depend on it. It is often used in combination with the probabilistic method for non-constructive existence proofs. A prominent application is to k-CNF formulas, where LLL implies that, if every clause in the formula shares variables with at most d <= 2^k/e other clauses then such a formula has a satisfying assignment. Recently, a randomized algorithm to efficiently construct a satisfying assignment was given by Moser. Subsequently Moser and Tardos gave a randomized algorithm to construct the structures guaranteed by the LLL in a very general algorithmic framework. We address the main problem left open by Moser and Tardos of derandomizing these algorithms efficiently. Specifically, for a k-CNF formula with m clauses and d <= 2^{k/(1+\eps)}/e for any \eps\in (0,1), we give an algorithm that finds a satisfying assignment in time \tilde{O}(m^{2(1+1/\eps)}). This improves upon the deterministic algorithms of Moser and of Moser-Tardos with running time m^{\Omega(k^2)} which is superpolynomial for k=\omega(1) and upon other previous algorithms which work only for d\leq 2^{k/16}/4. Our algorithm works efficiently for a general version of LLL under the algorithmic framework of Moser and Tardos, and is also parallelizable, i.e., has polylogarithmic running time using polynomially many processors.
Deterministic Algorithms for the Lovasz Local Lemma
4,295
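The randomized algorithm in question, as commonly presented (Moser's resampling procedure for k-CNF, which the paper derandomizes): pick a uniformly random assignment and, while some clause is violated, resample all of its variables. Clauses below are lists of signed integers, DIMACS-style (+v means variable v, -v its negation); the example formula is made up:

```python
import random

def violated(clause, assign):
    # a clause is violated iff every literal in it evaluates to False
    return all(assign[abs(l)] != (l > 0) for l in clause)

def moser_tardos(n_vars, clauses, seed=0):
    rng = random.Random(seed)
    assign = {v: rng.random() < 0.5 for v in range(1, n_vars + 1)}
    resamplings = 0
    while True:
        bad = next((c for c in clauses if violated(c, assign)), None)
        if bad is None:
            return assign, resamplings
        for l in bad:                        # resample the violated clause's vars
            assign[abs(l)] = rng.random() < 0.5
        resamplings += 1

clauses = [[1, 2, -3], [-1, 3, 4], [2, -4, 5], [-2, -5, 3]]
assign, r = moser_tardos(5, clauses)
assert not any(violated(c, assign) for c in clauses)
print(f"satisfying assignment found after {r} resamplings")
```

Under the LLL condition on clause dependencies, the expected number of resamplings is linear; the paper's contribution is removing the randomness from exactly this loop.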
This paper ties together the line of work on algorithms that find an O(sqrt(log(n)))-approximation to the sparsest cut and the line of work on algorithms that run in sub-quadratic time by using only single-commodity flows. We present an algorithm that simultaneously achieves both goals, finding an O(sqrt(log(n)/eps))-approximation using O(n^eps log^O(1) n) max-flows. The core of the algorithm is a stronger, algorithmic version of Arora et al.'s structure theorem, where we show that the matching-chaining argument at the heart of their proof can be viewed as an algorithm that finds good augmenting paths in certain geometric multicommodity flow networks. By using that specialized algorithm in place of a black-box solver, we are able to solve those instances much more efficiently. We also show that the cut-matching game framework cannot achieve an approximation any better than Omega(log(n)/log log(n)) without re-routing flow.
Breaking the Multicommodity Flow Barrier for sqrt(log(n))-Approximations to Sparsest Cut
4,296
We successfully contract timetable networks with realistic transfer times. Contraction gradually removes nodes from the graph and adds shortcuts to preserve shortest paths. This reduces query times to 1 ms with preprocessing times around 6 minutes on all tested instances. We achieve this by an improved contraction algorithm and by using a station graph model. Every node in our graph has a one-to-one correspondence to a station and every edge has an assigned collection of connections. Our graph model does not need parallel edges. The query algorithm does not compute a single earliest arrival time at a station but a set of arriving connections that allow best transfer opportunities.
Contraction of Timetable Networks with Realistic Transfers
4,297
We consider the class of packing integer programs (PIPs) that are column sparse, i.e. there is a specified upper bound k on the number of constraints that each variable appears in. We give an (ek+o(k))-approximation algorithm for k-column sparse PIPs, improving on recent results of $k^2\cdot 2^k$ and $O(k^2)$. We also show that the integrality gap of our linear programming relaxation is at least 2k-1; it is known that k-column sparse PIPs are $\Omega(k/ \log k)$-hard to approximate. We also extend our result (at the loss of a small constant factor) to the more general case of maximizing a submodular objective over k-column sparse packing constraints.
On k-Column Sparse Packing Programs
4,298
Most recent papers addressing the algorithmic problem of allocating advertisement space for keywords in sponsored search auctions assume that pricing is done via a first-price auction, which does not realistically model the Generalized Second Price (GSP) auction used in practice. Towards the goal of more realistically modeling these auctions, we introduce the Second-Price Ad Auctions problem, in which bidders' payments are determined by the GSP mechanism. We show that the complexity of the Second-Price Ad Auctions problem is quite different than that of the more studied First-Price Ad Auctions problem. First, unlike the first-price variant, for which small constant-factor approximations are known, it is NP-hard to approximate the Second-Price Ad Auctions problem to any non-trivial factor. Second, this discrepancy extends even to the 0-1 special case that we call the Second-Price Matching problem (2PM). In particular, offline 2PM is APX-hard, and for online 2PM there is no deterministic algorithm achieving a non-trivial competitive ratio and no randomized algorithm achieving a competitive ratio better than 2. This stands in contrast to the results for the analogous special case in the first-price model, the standard bipartite matching problem, which is solvable in polynomial time and which has deterministic and randomized online algorithms achieving better competitive ratios. On the positive side, we provide a 2-approximation for offline 2PM and a 5.083-competitive randomized algorithm for online 2PM. The latter result makes use of a new generalization of a classic result on the performance of the "Ranking" algorithm for online bipartite matching.
On Revenue Maximization in Second-Price Ad Auctions
4,299
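To make the pricing model concrete, a sketch of the GSP rule in its simplest per-click form (slots ordered by click-through rate, bidders by bid; each winner pays the next-highest bid per click). This illustrates the mechanism the Second-Price Ad Auctions problem is built on, not the paper's algorithms; the bids and rates are made up:

```python
def gsp(bids, ctrs):
    """bids: {bidder: bid per click}; ctrs: slot click-through rates, descending."""
    order = sorted(bids, key=bids.get, reverse=True)   # bidders by bid, descending
    outcome = []
    for slot, ctr in enumerate(ctrs):
        if slot >= len(order):
            break
        winner = order[slot]
        next_bid = bids[order[slot + 1]] if slot + 1 < len(order) else 0.0
        outcome.append((winner, slot, next_bid, ctr * next_bid))
    return outcome   # (bidder, slot, price per click, expected payment)

bids = {'x': 5.0, 'y': 3.0, 'z': 1.0}
for row in gsp(bids, ctrs=[0.30, 0.15]):
    print(row)   # ('x', 0, 3.0, 0.9) then ('y', 1, 1.0, 0.15)
```

The hardness results in the abstract stem from the way these payments couple each winner's price to the bidder seated below, which a first-price model does not do.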