| text | source | __index_level_0__ |
|---|---|---|
In this paper we describe data structures for orthogonal range reporting in external memory that support fast update operations. The query costs either match the query costs of the best previously known data structures or differ by a small multiplicative factor. | External Memory Orthogonal Range Reporting with Fast Updates | 4,700 |
In this paper, we present a number of network-analysis algorithms in the external-memory model. We focus on methods for large naturally sparse graphs, that is, n-vertex graphs that have O(n) edges and are structured so that this sparsity property holds for any subgraph of such a graph. We give efficient external-memory algorithms for the following problems for such graphs: - Finding an approximate d-degeneracy ordering; - Finding a cycle of length exactly c; - Enumerating all maximal cliques. Such problems are of interest, for example, in the analysis of social networks, where they are used to study network cohesion. | External-Memory Network Analysis Algorithms for Naturally Sparse Graphs | 4,701 |
The problem of finding a longest common subsequence of two main sequences, subject to the constraint that a given string must be a substring of the result (STR-IC-LCS), was formulated recently. It is a variant of the constrained longest common subsequence problem. As the known algorithms for the STR-IC-LCS problem are cubic-time, the presented quadratic-time algorithm is significantly faster. | Quadratic-time Algorithm for the String Constrained LCS Problem | 4,702 |
We study an important practical aspect of the route planning problem in real-world road networks -- maneuvers. Informally, maneuvers represent various irregularities of the road network graph such as turn prohibitions, traffic light delays, roundabouts, forbidden passages and so on. We propose a generalized model which can handle arbitrarily complex (and even negative) maneuvers, and outline how to enhance Dijkstra's algorithm in order to solve route planning queries in this model without prior adjustments of the underlying road network graph. | Generalized Maneuvers in Route Planning | 4,703 |
We address the problem of finding the minimal number of block interchanges (exchange of two intervals) required to transform a duplicated linear genome into a tandem duplicated linear genome. We provide a formula for the distance as well as a polynomial time algorithm for the sorting problem. | Genome Halving by Block Interchange | 4,704 |
We study the lift-and-project procedures of Lov{\'a}sz-Schrijver and Sherali-Adams applied to the standard linear programming relaxation of the traveling salesperson problem with triangle inequality. For the asymmetric TSP tour problem, Charikar, Goemans, and Karloff (FOCS 2004) proved that the integrality gap of the standard relaxation is at least 2. We prove that after one round of the Lov{\'a}sz-Schrijver or Sherali-Adams procedures, the integrality gap of the asymmetric TSP tour problem is at least 3/2, with a small caveat on which version of the standard relaxation is used. For the symmetric TSP tour problem, the integrality gap of the standard relaxation is known to be at least 4/3, and Cheung (SIOPT 2005) proved that it remains at least 4/3 after $o(n)$ rounds of the Lov{\'a}sz-Schrijver procedure, where $n$ is the number of nodes. For the symmetric TSP path problem, the integrality gap of the standard relaxation is known to be at least 3/2, and we prove that it remains at least 3/2 after $o(n)$ rounds of the Lov{\'a}sz-Schrijver procedure, by a simple reduction to Cheung's result. | Lift-and-Project Integrality Gaps for the Traveling Salesperson Problem | 4,705 |
In this paper we consider two above lower bound parameterizations of the Node Multiway Cut problem - above the maximum separating cut and above a natural LP-relaxation - and prove them to be fixed-parameter tractable. Our results imply O*(4^k) algorithms for Vertex Cover above Maximum Matching and Almost 2-SAT as well as an O*(2^k) algorithm for Node Multiway Cut with a standard parameterization by the solution size, improving previous bounds for these problems. | On Multiway Cut parameterized above lower bounds | 4,706 |
Determining the precise integrality gap for the subtour LP relaxation of the traveling salesman problem is a significant open question, with little progress made in thirty years in the general case of symmetric costs that obey triangle inequality. Boyd and Carr [3] observe that we do not even know the worst-case upper bound on the ratio of the optimal 2-matching to the subtour LP; they conjecture the ratio is at most 10/9. In this paper, we prove the Boyd-Carr conjecture. In the case that a fractional 2-matching has no cut edge, we can further prove that an optimal 2-matching is at most 10/9 times the cost of the fractional 2-matching. | A Proof of the Boyd-Carr Conjecture | 4,707 |
In this paper, we study the integrality gap of the subtour LP relaxation for the traveling salesman problem in the special case when all edge costs are either 1 or 2. For the general case of symmetric costs that obey triangle inequality, a famous conjecture is that the integrality gap is 4/3. Little progress towards resolving this conjecture has been made in thirty years. We conjecture that when all edge costs $c_{ij}\in \{1,2\}$, the integrality gap is $10/9$. We show that this conjecture is true when the optimal subtour LP solution has a certain structure. Under a weaker assumption, which is an analog of a recent conjecture by Schalekamp, Williamson and van Zuylen, we show that the integrality gap is at most $7/6$. When we do not make any assumptions on the structure of the optimal subtour LP solution, we can show that integrality gap is at most $5/4$; this is the first bound on the integrality gap of the subtour LP strictly less than $4/3$ known for an interesting special case of the TSP. We show computationally that the integrality gap is at most $10/9$ for all instances with at most 12 cities. | On the Integrality Gap of the Subtour LP for the 1,2-TSP | 4,708 |
In this paper, we consider the generalized min-sum set cover problem, introduced by Azar, Gamzu, and Yin. Bansal, Gupta, and Krishnaswamy give a 485-approximation algorithm for the problem. We are able to alter their algorithm and analysis to obtain a 28-approximation algorithm, improving the performance guarantee by an order of magnitude. We use concepts from $\alpha$-point scheduling to obtain our improvements. | A note on the generalized min-sum set cover problem | 4,709 |
We study the problem of scheduling a set of jobs with release dates, deadlines and processing requirements (or works), on parallel speed-scaled processors so as to minimize the total energy consumption. We assume that both preemption and migration of jobs are allowed. An exact polynomial-time algorithm has been proposed for this problem, which is based on the Ellipsoid algorithm. Here, we formulate the problem as a convex program and we propose a simpler polynomial-time combinatorial algorithm which is based on a reduction to the maximum flow problem. Our algorithm runs in $O(n f(n) \log P)$ time, where $n$ is the number of jobs, $P$ is the range of all possible values of processors' speeds divided by the desired accuracy, and $f(n)$ is the complexity of computing a maximum flow in a layered graph with $O(n)$ vertices. Independently, Albers et al. \cite{AAG11} proposed an $O(n^2f(n))$-time algorithm exploiting the same relation with the maximum flow problem. We extend our algorithm to the multiprocessor speed scaling problem with migration where the objective is the minimization of the makespan under a budget of energy. | Speed Scaling on Parallel Processors with Migration | 4,710 |
We study the matroid secretary problems with submodular valuation functions. In these problems, the elements arrive in random order. When one element arrives, we have to make an immediate and irrevocable decision on whether to accept it or not. The set of accepted elements must form an {\em independent set} in a predefined matroid. Our objective is to maximize the value of the accepted elements. In this paper, we focus on the case that the valuation function is a non-negative and monotonically non-decreasing submodular function. We introduce a general algorithm for such {\em submodular matroid secretary problems}. In particular, we obtain constant competitive algorithms for the cases of laminar matroids and transversal matroids. Our algorithms can be further applied to any independent set system defined by the intersection of a {\em constant} number of laminar matroids, while still achieving constant competitive ratios. Notice that laminar matroids generalize uniform matroids and partition matroids. On the other hand, when the underlying valuation function is linear, our algorithm achieves a competitive ratio of 9.6 for laminar matroids, which significantly improves the previous result. | The Simulated Greedy Algorithm for Several Submodular Matroid Secretary Problems | 4,711 |
A seed in a word is a relaxed version of a period in which the occurrences of the repeating subword may overlap. We show a linear-time algorithm computing a linear-size representation of all the seeds of a word (the number of seeds might be quadratic). In particular, one can easily derive the shortest seed and the number of seeds from our representation. Thus, we solve an open problem stated in the survey by Smyth (2000) and improve upon a previous O(n log n) algorithm by Iliopoulos, Moore, and Park (1996). Our approach is based on combinatorial relations between seeds and subword complexity (used here for the first time in the context of seeds). In the previous papers, the compact representation of seeds consisted of two independent parts operating on the suffix tree of the word and the suffix tree of the reverse of the word, respectively. Our second contribution is a simpler representation of all seeds which avoids dealing with the reversed word. A preliminary version of this work, with a much more complex algorithm constructing the earlier representation of seeds, was presented at the 23rd Annual ACM-SIAM Symposium on Discrete Algorithms (SODA 2012). | A Linear Time Algorithm for Seeds Computation | 4,712 |
In this paper we study the classic problem of computing a maximum cardinality matching in general graphs $G = (V, E)$. The best known algorithm for this problem to date runs in $O(m \sqrt{n})$ time, due to Micali and Vazirani \cite{MV80}. Even for general bipartite graphs this is the best known running time (the algorithm of Hopcroft and Karp \cite{HK73} also achieves this bound). For regular bipartite graphs one can achieve an $O(m)$ time algorithm which, following a series of papers, has been recently improved to $O(n \log n)$ by Goel, Kapralov and Khanna (STOC 2010) \cite{GKK10}. In this paper we present a randomized algorithm based on the Markov Chain Monte Carlo paradigm which runs in $O(m \log^2 n)$ time, thereby obtaining a significant improvement over \cite{MV80}. We use a Markov chain similar to the \emph{hard-core model} for Glauber Dynamics with \emph{fugacity} parameter $\lambda$, which is used to sample independent sets in a graph from the Gibbs Distribution \cite{V99}, to design a faster algorithm for finding maximum matchings in general graphs. Our result crucially relies on the fact that the mixing time of our Markov Chain is independent of $\lambda$, a significant deviation from the recent series of works \cite{GGSVY11,MWW09, RSVVY10, S10, W06} which achieve computational transition (for estimating the partition function) at a threshold value of $\lambda$. Using the conductance bound, we also prove that mixing takes $\Omega(\frac{m}{k})$ time, where $k$ is the size of the maximum matching. | Maximum Matchings via Glauber Dynamics | 4,713 |
Matching pursuits are a class of greedy algorithms commonly used in signal processing for solving the sparse approximation problem. They rely on an atom selection step that requires the calculation of numerous projections, which can be computationally costly for large dictionaries and hampers their competitiveness in coding applications. We propose using a non-adaptive random sequence of subdictionaries in the decomposition process, thus parsing a large dictionary in a probabilistic fashion with no additional projection cost nor parameter estimation. A theoretical modeling based on order statistics is provided, along with experimental evidence showing that the novel algorithm can be efficiently used on sparse approximation problems. An application to audio signal compression with multiscale time-frequency dictionaries is presented, along with a discussion of the complexity and practical implementations. | Matching Pursuits with Random Sequential Subdictionaries | 4,714 |
Given an undirected graph G=(V,E), a collection (s_1,t_1),...,(s_k,t_k) of k source-sink pairs, and an integer c, the goal in the Edge Disjoint Paths with Congestion problem is to connect the maximum possible number of source-sink pairs by paths, so that the maximum load on any edge (called edge congestion) does not exceed c. We show an efficient randomized algorithm to route $\Omega(OPT/\poly\log k)$ source-sink pairs with congestion at most 14, where OPT is the maximum number of pairs that can be simultaneously routed on edge-disjoint paths. The best previous algorithm that routed $\Omega(OPT/\poly\log n)$ pairs required congestion $\poly(\log \log n)$, and for the setting where the maximum allowed congestion is bounded by a constant c, the best previous algorithms could only guarantee the routing of $OPT/n^{O(1/c)}$ pairs. | Routing in Undirected Graphs with Constant Congestion | 4,715 |
A basic fact in algebraic graph theory is that the number of connected components in an undirected graph is equal to the multiplicity of the eigenvalue 1 in the normalized adjacency matrix of the graph. In particular, the graph is disconnected if and only if there are at least two eigenvalues equal to 1. Cheeger's inequality provides an "approximate" version of the latter fact, and it states that a graph has a sparse cut (it is "almost disconnected") if and only if there are at least two eigenvalues that are close to one. It has been conjectured that an analogous characterization holds for higher multiplicities, that is, there are $k$ eigenvalues close to 1 if and only if the vertex set can be partitioned into $k$ subsets, each defining a sparse cut. In this paper we resolve this conjecture. Our result provides a theoretical justification for clustering algorithms that use the top $k$ eigenvectors to embed the vertices into $\R^k$, and then apply geometric considerations to the embedding. | A Higher-Order Cheeger's Inequality | 4,716 |
We consider the problem of {\em restructuring} compressed texts without explicit decompression. We present algorithms which allow conversions from compressed representations of a string $T$ produced by any grammar-based compression algorithm, to representations produced by several specific compression algorithms including LZ77, LZ78, run-length encoding, and some grammar-based compression algorithms. These are the first algorithms that achieve running times polynomial in the size of the compressed input and output representations of $T$. Since most of the representations we consider can achieve exponential compression, our algorithms are theoretically faster, in the worst case, than any algorithm that first decompresses the string for the conversion. | Restructuring Compressed Texts without Explicit Decompression | 4,717 |
Collage systems are a general framework for representing the outputs of various text compression algorithms. We consider the all $q$-gram frequency problem on a compressed string represented as a collage system, and present an $O((q+h\log n)n)$-time, $O(qn)$-space algorithm for calculating the frequencies of all $q$-grams that occur in the string. Here, $n$ and $h$ are respectively the size and height of the collage system. | Computing q-gram Frequencies on Collage Systems | 4,718 |
Length-$q$ substrings, or $q$-grams, can represent important characteristics of text data, and determining the frequencies of all $q$-grams contained in the data is an important problem with many applications in the field of data mining and machine learning. In this paper, we consider the problem of calculating the {\em non-overlapping frequencies} of all $q$-grams in a text given in compressed form, namely, as a straight line program (SLP). We show that the problem can be solved in $O(q^2n)$ time and $O(qn)$ space where $n$ is the size of the SLP. This generalizes and greatly improves previous work (Inenaga & Bannai, 2009) which solved the problem only for $q=2$ in $O(n^4\log n)$ time and $O(n^3)$ space. | Computing q-gram Non-overlapping Frequencies on SLP Compressed Texts | 4,719 |
Sundararajan and Chakraborty (2007) introduced a new version of Quick sort that removes the interchanges. Khreisat (2007) found this algorithm to compete well with some other versions of Quick sort. However, it uses an auxiliary array, thereby increasing the space complexity. Here, we provide a second version of our new sort in which we have removed the auxiliary array. This second improved version of the algorithm, which we call K-sort, is found to sort elements faster than Heap sort for an appreciably large array size (n <= 7,000,000, i.e., 70 lakhs) for uniform U[0, 1] inputs. | K-sort: A new sorting algorithm that beats Heap sort for n <= 70 lakhs! | 4,720 |
The Odd Cycle Transversal problem (OCT) asks whether a given graph can be made bipartite (i.e., 2-colorable) by deleting at most l vertices. We study structural parameterizations of OCT with respect to their polynomial kernelizability, i.e., whether instances can be efficiently reduced to a size polynomial in the chosen parameter. It is a major open problem in parameterized complexity whether Odd Cycle Transversal admits a polynomial kernel when parameterized by l. On the positive side, we show a polynomial kernel for OCT when parameterized by the vertex deletion distance to the class of bipartite graphs of treewidth at most w (for any constant w); this generalizes the parameter feedback vertex set number (i.e., the distance to a forest). Complementing this, we exclude polynomial kernels for OCT parameterized by the distance to outerplanar graphs, conditioned on the assumption that NP \not\subseteq coNP/poly. Thus the bipartiteness requirement for the treewidth w graphs is necessary. Further lower bounds are given for parameterization by distance from cluster and co-cluster graphs respectively, as well as for Weighted OCT parameterized by the vertex cover number (i.e., the distance from an independent set). | On Polynomial Kernels for Structural Parameterizations of Odd Cycle Transversal | 4,721 |
Until recently, techniques for obtaining lower bounds for kernelization were one of the most sought after tools in the field of parameterized complexity. Now, after a strong influx of techniques, we are in the fortunate situation of having tools available that are even stronger than what has been required in their applications so far. Based on a result of Fortnow and Santhanam (JCSS 2011), Bodlaender et al. (JCSS 2009) showed that, unless NP \subseteq coNP/poly, the existence of a deterministic polynomial-time composition algorithm, i.e., an algorithm which outputs an instance of bounded parameter value which is yes if and only if one of t input instances is yes, rules out the existence of polynomial kernels for a problem. Dell and van Melkebeek (STOC 2010) continued this line of research and, amongst others, were able to rule out kernels of size O(k^{d-eps}) for certain problems, assuming NP \not\subseteq coNP/poly. Their work implies that even the existence of a co-nondeterministic composition rules out polynomial kernels. In this work we present the first example of how co-nondeterminism can help in constructing a composition algorithm. We study a Ramsey-type problem: Given a graph G and an integer k, the question is whether G contains an independent set or a clique of size at least k. It was asked by Rod Downey whether this problem admits a polynomial kernelization. We provide a co-nondeterministic composition based on embedding t instances into a single host graph H. The crux is that the host graph H needs to observe a bound of L \in O(log t) on both its maximum independent set and maximum clique size, while also having a cover of its vertex set by independent sets and cliques all of size L; the co-nondeterministic composition is built around the search for such graphs. Thus we show that, unless NP \subseteq coNP/poly, the problem does not admit a kernelization with polynomial size guarantee. | Co-nondeterminism in compositions: A kernelization lower bound for a Ramsey-type problem | 4,722 |
Smoothed analysis of multiobjective 0-1 linear optimization has drawn considerable attention recently. The number of Pareto-optimal solutions (i.e., solutions with the property that no other solution is at least as good in all the coordinates and better in at least one) for multiobjective optimization problems is the central object of study. In this paper, we prove several lower bounds for the expected number of Pareto optima. Our basic result is a lower bound of $\Omega_d(n^{d-1})$ for optimization problems with d objectives and n variables under fairly general conditions on the distributions of the linear objectives. Our proof relates the problem of lower bounding the number of Pareto optima to results in geometry connected to arrangements of hyperplanes. We use our basic result to derive (1) to our knowledge, the first lower bound for natural multiobjective optimization problems; we illustrate this for the maximum spanning tree problem with randomly chosen edge weights, and our technique is sufficiently flexible to yield such lower bounds for other standard objective functions studied in this setting (such as multiobjective shortest path, TSP tour, matching); (2) a smoothed lower bound of $\min\{\Omega_d(n^{d-1.5}\phi^{(d-\log d)(1-\Theta(1/\phi))}), 2^{\Theta(n)}\}$ for the 0-1 knapsack problem with d profits under $\phi$-semirandom distributions. This improves the recent lower bound of Brunsch and Roeglin. | Lower Bounds for the Average and Smoothed Number of Pareto Optima | 4,723 |
2-joins are edge cutsets that naturally appear in the decomposition of several classes of graphs closed under taking induced subgraphs, such as balanced bipartite graphs, even-hole-free graphs, perfect graphs and claw-free graphs. Their detection is needed in several algorithms, and is the slowest step for some of them. The classical method to detect a 2-join takes $O(n^3m)$ time, where $n$ is the number of vertices of the input graph and $m$ the number of its edges. To detect \emph{non-path} 2-joins (special kinds of 2-joins that are needed in all of the known algorithms that use 2-joins), the fastest known method takes $O(n^4m)$ time. Here, we give an $O(n^2m)$-time algorithm for both of these problems. A consequence is a speed-up of several known algorithms. | Detecting 2-joins faster | 4,724 |
Sorting is one of the most used and best investigated algorithmic problems [1]. Traditionally, it is postulated that the data to be sorted are archived and that the elementary operation is a comparison of two numbers. With the appearance of new processors and of applied problems involving data streams, sorting has changed its face. These changes and generalizations are the subject of the investigation below. | Sorting Algorithms with Restrictions | 4,725 |
A dictionary (or map) is a key-value store that requires all keys be unique, and a multimap is a key-value store that allows for multiple values to be associated with the same key. We design hashing-based indexing schemes for dictionaries and multimaps that achieve worst-case optimal performance for lookups and updates, with a small or negligible probability that the data structure will require a rehash operation, depending on whether we are working in the external-memory (I/O) model or one of the well-known versions of the Random Access Machine (RAM) model. One of the main features of our constructions is that they are \emph{fully de-amortized}, meaning that their performance bounds hold without one having to tune their constructions with certain performance parameters, such as the constant factors in the exponents of failure probabilities or, in the case of the external-memory model, the size of blocks or cache lines and the size of internal memory (i.e., our external-memory algorithms are cache-oblivious). Our solutions are based on a fully de-amortized implementation of cuckoo hashing, which may be of independent interest. This hashing scheme uses two cuckoo hash tables, one "nested" inside the other, with one serving as a primary structure and the other serving as an auxiliary supporting queue/stash structure that is super-sized with respect to traditional auxiliary structures but nevertheless adds negligible storage to our scheme. This auxiliary structure allows the success probability for cuckoo hashing to be very high, which is useful in cryptographic or data-intensive applications. | Fully De-Amortized Cuckoo Hashing for Cache-Oblivious Dictionaries and Multimaps | 4,726 |
We show that there is a polynomial-space algorithm that counts the number of perfect matchings in an $n$-vertex graph in $O^*(2^{n/2})\subset O(1.415^n)$ time, where $O^*(f(n))$ suppresses factors polylogarithmic in $f(n)$. The previously fastest algorithms for the problem were the exponential-space $O^*(((1+\sqrt{5})/2)^n) \subset O(1.619^n)$ time algorithm by Koivisto and, in polynomial space, the $O(1.942^n)$ time algorithm by Nederlof. Our new algorithm's runtime matches, up to polynomial factors, that of Ryser's 1963 algorithm for bipartite graphs. We present our algorithm in the more general setting of computing the hafnian over an arbitrary ring, analogously to Ryser's algorithm for permanent computation. We also give a simple argument why the general exact set cover counting problem over a slightly superpolynomial sized family of subsets of an $n$ element ground set cannot be solved in $O^*(2^{(1-\epsilon_1)n})$ time for any $\epsilon_1>0$ unless there are $O^*(2^{(1-\epsilon_2)n})$ time algorithms for computing an $n\times n$ 0/1 matrix permanent, for some $\epsilon_2>0$ depending only on $\epsilon_1$. | Counting Perfect Matchings as Fast as Ryser | 4,727 |
Several problems that are NP-hard on general graphs are efficiently solvable on graphs with bounded treewidth. Efforts have been made to generalize treewidth and the related notion of pathwidth to digraphs. Directed treewidth, DAG-width and Kelly-width are some such notions which generalize treewidth, whereas directed pathwidth generalizes pathwidth. Each of these digraph width measures has an associated decomposition structure. In this paper, we present approximation algorithms for all these digraph width parameters. In particular, we give an $O(\sqrt{\log n})$-approximation algorithm for directed treewidth, and an $O(\log^{3/2} n)$-approximation algorithm for directed pathwidth, DAG-width and Kelly-width. Our algorithms construct the corresponding decompositions whose widths are within the above mentioned approximation factors. | Approximation Algorithms for Digraph Width Parameters | 4,728 |
Given a graph with edge costs, the {\em power} of a node is the maximum cost of an edge incident to it, and the power of a graph is the sum of the powers of its nodes. Motivated by applications in wireless networks, we consider the following fundamental problem in wireless network design. Given a graph $G=(V,E)$ with edge costs and degree bounds $\{r(v):v \in V\}$, the {\sf Minimum-Power Edge-Multi-Cover} ({\sf MPEMC}) problem is to find a minimum-power subgraph $J$ of $G$ such that the degree of every node $v$ in $J$ is at least $r(v)$. We give two approximation algorithms for {\sf MPEMC}, with ratios $O(\log k)$ and $k+1/2$, where $k=\max_{v \in V} r(v)$ is the maximum degree bound. This improves the previous ratios $O(\log n)$ and $k+1$, and implies ratios $O(\log k)$ for the {\sf Minimum-Power $k$-Outconnected Subgraph} and $O(\log k \log \frac{n}{n-k})$ for the {\sf Minimum-Power $k$-Connected Subgraph} problems; the latter is the currently best known ratio for the min-cost version of the problem. | Approximating minimum-power edge-multicovers | 4,729 |
Point-set embeddings and large-angle crossings are two areas of graph drawing that independently have received a lot of attention in the past few years. In this paper, we consider problems in the intersection of these two areas. Given the point-set-embedding scenario, we are interested in how much we gain in terms of computational complexity, curve complexity, and generality if we allow large-angle crossings as compared to the planar case. We investigate two drawing styles where only bends or both bends and edges must be drawn on an underlying grid. We present various results for drawings with one, two, and three bends per edge. | Drawing Graphs with Vertices at Specified Positions and Crossings at Large Angles | 4,730 |
We show that, for any c>0, the (1+1) evolutionary algorithm using an arbitrary mutation rate p_n = c/n finds the optimum of a linear objective function over bit strings of length n in expected time Theta(n log n). Previously, this was only known for c at most 1. Since previous work also shows that universal drift functions cannot exist for c larger than a certain constant, we instead define drift functions which depend crucially on the relevant objective functions (and also on c itself). Using these carefully-constructed drift functions, we prove that the expected optimisation time is Theta(n log n). By giving an alternative proof of the multiplicative drift theorem, we also show that our optimisation-time bound holds with high probability. | Adaptive Drift Analysis | 4,731 |
We study the \emph{bounded-delay model} for Quality-of-Service buffer management. Time is discrete. Unit-length jobs (also called \emph{packets}) arrive at a buffer over time. Each packet has an integer release time, an integer deadline, and a positive real value. A packet's characteristics are not known to an online algorithm until the packet actually arrives. In each time step, at most one packet can be sent out of the buffer. The objective is to maximize the total value of the packets sent by their respective deadlines in an online manner. An online algorithm's performance is usually measured in terms of its \emph{competitive ratio}, comparing the online algorithm with a clairvoyant algorithm achieving the best total value. In this paper, we study a simple and intuitive online algorithm. We analyze its performance in terms of competitive ratio for the general model and a few important variants. | A Comprehensive Study of an Online Packet Scheduling Algorithm | 4,732 |
Let $\D = \{d_1,d_2,...,d_D\}$ be a given set of $D$ string documents of total length $n$. Our task is to index $\D$ such that the $k$ most relevant documents for an online query pattern $P$ of length $p$ can be retrieved efficiently. We propose an index of size $|CSA|+n\log D(2+o(1))$ bits and $O(t_{s}(p)+k\log\log n+poly\log\log n)$ query time for the basic relevance metric \emph{term-frequency}, where $|CSA|$ is the size (in bits) of a compressed full-text index of $\D$ with $O(t_s(p))$ time for searching a pattern of length $p$. We further reduce the space to $|CSA|+n\log D(1+o(1))$ bits; however, the query time becomes $O(t_s(p)+k(\log \sigma \log\log n)^{1+\epsilon}+poly\log\log n)$, where $\sigma$ is the alphabet size and $\epsilon >0$ is any constant. | Towards an Optimal Space-and-Query-Time Index for Top-k Document Retrieval | 4,733 |
In the SCHED problem we are given a set of n jobs, together with their processing times and precedence constraints. The task is to order the jobs so that their total completion time is minimized. SCHED is a special case of the Traveling Repairman Problem with precedences. A natural dynamic programming algorithm solves both these problems in 2^n n^O(1) time, and whether there exists an algorithm solving SCHED in O(c^n) time for some constant c < 2 was an open problem posed in 2004 by Woeginger. In this paper we answer this question positively. | Scheduling partially ordered jobs faster than 2^n | 4,734
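The natural subset dynamic program mentioned in this abstract can be sketched as follows (an illustrative Python sketch of the classical 2^n DP, not the paper's faster algorithm; the bitmask encoding of precedences is an assumption made here for compactness):

```python
def min_total_completion_time(p, preds):
    """2^n dynamic program for SCHED: dp[S] is the minimum total
    completion time when exactly the jobs in bitmask S have been
    scheduled as a prefix. preds[j] is a bitmask of jobs that must
    precede job j."""
    n = len(p)
    INF = float("inf")
    # total processing time of every subset, built incrementally
    psum = [0] * (1 << n)
    for S in range(1, 1 << n):
        j = (S & -S).bit_length() - 1
        psum[S] = psum[S & (S - 1)] + p[j]
    dp = [INF] * (1 << n)
    dp[0] = 0
    for S in range(1, 1 << n):
        for j in range(n):
            if not (S >> j) & 1:
                continue
            rest = S & ~(1 << j)
            # j may be scheduled last within S only if all of its
            # predecessors already lie in the prefix
            if preds[j] & rest == preds[j] and dp[rest] + psum[S] < dp[S]:
                dp[S] = dp[rest] + psum[S]
    return dp[(1 << n) - 1]
```

For two jobs with processing times 2 and 1 and no precedences, scheduling the short job first yields completion times 1 and 3, total 4.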
One of the fundamental problems in the theory of sorting is to find the pessimistic (worst-case) number of comparisons sufficient to sort a given number of elements. Currently, 16 is the smallest number of elements for which we do not know the exact value: we know that 46 comparisons suffice and that 44 do not, leaving open the question of whether 45 comparisons are sufficient. We present an attempt to resolve this problem by performing an exhaustive computer search. We also present an algorithm for counting linear extensions which substantially speeds up the computation. | Towards Optimal Sorting of 16 Elements | 4,735
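Counting linear extensions of a partial order (the subroutine this abstract relies on) can be sketched by a memoized recursion over downsets; this is a minimal illustrative version, not the paper's optimized counting algorithm:

```python
from functools import lru_cache

def count_linear_extensions(n, preds):
    """Count linear extensions of a poset on n elements, where
    preds[j] is a bitmask of the elements that must precede j.
    Recursion: element j may be placed last in the set S only if
    no element of S lists j as a predecessor (j is maximal in S)."""
    @lru_cache(maxsize=None)
    def e(S):
        if S == 0:
            return 1
        total = 0
        for j in range(n):
            if not (S >> j) & 1:
                continue
            # skip j if some i in S requires j before it
            if any((S >> i) & 1 and (preds[i] >> j) & 1 for i in range(n)):
                continue
            total += e(S & ~(1 << j))
        return total
    return e((1 << n) - 1)
```

An antichain on 3 elements has 3! = 6 linear extensions; a 3-element chain has exactly one.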
In this paper we present an algorithm, called conauto-2.0, that can efficiently compute a set of generators of the automorphism group of a graph and test whether two graphs are isomorphic, finding an isomorphism if they are. This algorithm uses the basic individualization/refinement technique and is an improved version of the algorithm conauto, which has been shown to be very fast for random graphs and several families of hard graphs. We prove that, under some circumstances, it is not only possible to prune the search space (using already-found generators of the automorphism group), but also to infer new generators without the need to explicitly find an automorphism of the graph. This result is especially suited for graphs with regularly connected components, and can be applied in any isomorphism-testing and canonical-labeling algorithm that uses the individualization/refinement technique to significantly improve its performance. Additionally, a dynamic target-cell selection function is used to adapt to different graphs. The resulting algorithm preserves all the nice features of conauto, but reduces the time for testing graphs with regularly connected components and other hard graph families. We run extensive experiments, which show that, for graph families based on components, the most popular algorithms (namely, nauty, bliss, Traces, and saucy) are slower than conauto-2.0. | Conauto-2.0: Fast Isomorphism Testing and Automorphism Group Computation | 4,736
The Travelling Salesman Problem is one of the most fundamental and most studied problems in approximation algorithms. For more than 30 years, the best algorithm known for general metrics has been Christofides's algorithm, with an approximation factor of 3/2, even though the so-called Held-Karp LP relaxation of the problem is conjectured to have an integrality gap of only 4/3. Very recently, significant progress has been made for the important special case of graphic metrics, first by Oveis Gharan et al., and then by Momke and Svensson. In this paper, we provide an improved analysis of the approach introduced by Momke and Svensson, yielding a bound of 13/9 on the approximation factor, as well as a bound of 19/12+epsilon, for any epsilon>0, for the more general Travelling Salesman Path Problem in graphic metrics. | 13/9-approximation for Graphic TSP | 4,737
We focus on designing combinatorial algorithms for the Capacitated Network Design problem (Cap-SNDP). The Cap-SNDP is the problem of satisfying connectivity requirements when edges have costs and hard capacities. We begin by showing that the Group Steiner tree problem (GST) is a special case of Cap-SNDP even when there is connectivity requirement between only one source-sink pair. This implies the first poly-logarithmic lower bound for the Cap-SNDP. We next provide combinatorial algorithms for several special cases of this problem. The Cap-SNDP is equivalent to its special case when every edge has either zero cost or infinite capacity. We consider a special case, called Connected Cap-SNDP, where all infinite-capacity edges in the solution are required to form a connected component containing the sinks. This problem is motivated by its similarity to the Connected Facility Location problem [G+01,SW04]. We solve this problem by reducing it to Submodular tree cover problem, which is a common generalization of Connected Cap-SNDP and Group Steiner tree problem. We generalize the recursive greedy algorithm [CEK] achieving a poly-logarithmic approximation algorithm for Submodular tree cover problem. This result is interesting in its own right and gives the first poly-logarithmic approximation algorithms for Connected hard capacities set multi-cover and Connected source location. We then study another special case of Cap-SNDP called Unbalanced point-to-point connection problem. Besides its practical applications to shift design problems [EKS], it generalizes many problems such as k-MST, Steiner Forest and Point-to-Point Connection. We give a combinatorial logarithmic approximation algorithm for this problem by reducing it to degree-bounded SNDP. | Combinatorial Algorithms for Capacitated Network Design | 4,738 |
k-means has recently been recognized as one of the best algorithms for clustering unsupervised data. Since k-means depends mainly on distance calculations between all data points and the centers, its time cost is high when the dataset is large (for example, more than 500 million points). We propose a two-stage algorithm to reduce the time cost of distance calculation for huge datasets. The first stage is a fast distance calculation using only a small portion of the data to produce the best possible location of the centers. The second stage is a slow distance calculation in which the initial centers used are taken from the first stage. The fast and slow stages represent the speed of the movement of the centers. In the slow stage, the whole dataset can be used to get the exact location of the centers. The time cost of the distance calculation for the fast stage is very low due to the small size of the training data chosen. The time cost of the distance calculation for the slow stage is also minimized due to the small number of iterations. Different initial locations of the clusters have been used during the tests of the proposed algorithm. For large datasets, experiments show that the two-stage clustering method achieves a speed-up of 1-9 times. | Fast k-means algorithm clustering | 4,739
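The two-stage idea can be sketched as follows. This is an illustrative 1-D toy (`lloyd`, `two_stage_kmeans`, and the sampling fraction are assumptions made here, not the authors' implementation): the fast stage clusters a small random sample to move the centers cheaply, and the slow stage refines on the full data from those centers, so few full-data iterations remain:

```python
import random

def lloyd(points, centers, iters=100):
    """Plain Lloyd iterations on 1-D points (illustrative only)."""
    for _ in range(iters):
        groups = [[] for _ in centers]
        for x in points:
            i = min(range(len(centers)), key=lambda c: (x - centers[c]) ** 2)
            groups[i].append(x)
        new = [sum(g) / len(g) if g else centers[i]
               for i, g in enumerate(groups)]
        if new == centers:  # converged
            break
        centers = new
    return centers

def two_stage_kmeans(points, k, sample_frac=0.1, seed=0):
    """Fast stage on a sample, slow stage on the full data set."""
    rng = random.Random(seed)
    sample = rng.sample(points, max(k, int(sample_frac * len(points))))
    rough = lloyd(sample, sample[:k])   # fast stage: cheap distance calcs
    return lloyd(points, rough)         # slow stage: few iterations left
```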
We introduce a new regression problem which we call the Sum-Based Hierarchical Smoothing problem. Given a directed acyclic graph and a non-negative value, called the target value, for each vertex in the graph, we wish to find non-negative values for the vertices satisfying a certain constraint while minimizing the distance between these assigned values and the target values in the $\ell_p$-norm. The constraint is that the value assigned to each vertex should be no less than the sum of the values assigned to its children. We motivate this problem with applications in information retrieval and web mining. While our problem can be solved in polynomial time using linear programming, given the input size in these applications such a solution might be too slow. We mainly study the $\ell_1$-norm case, restricting the underlying graphs to rooted trees. For this case we provide an efficient algorithm, running in O(n^2) time. While the algorithm is purely combinatorial, its proof of correctness is an elegant use of linear programming duality. We believe that our approach may be applicable to similar problems where comparable hierarchical constraints are involved, e.g., considering the average of the values assigned to the children of each vertex. While similar in flavor to other smoothing problems like Isotonic Regression (see for example [Angelov et al. SODA'06]), our problem is arguably richer and theoretically more challenging. | Efficient Sum-Based Hierarchical Smoothing Under \ell_1-Norm | 4,740
We investigate the problem of succinctly representing an arbitrary permutation, \pi, on {0,...,n-1} so that \pi^k(i) can be computed quickly for any i and any (positive or negative) integer power k. A representation taking (1+\epsilon) n lg n + O(1) bits suffices to compute arbitrary powers in constant time, for any positive constant \epsilon <= 1. A representation taking the optimal \ceil{\lg n!} + o(n) bits can be used to compute arbitrary powers in O(lg n / lg lg n) time. We then consider the more general problem of succinctly representing an arbitrary function, f: [n] \rightarrow [n] so that f^k(i) can be computed quickly for any i and any integer power k. We give a representation that takes (1+\epsilon) n lg n + O(1) bits, for any positive constant \epsilon <= 1, and computes arbitrary positive powers in constant time. It can also be used to compute f^k(i), for any negative integer k, in optimal O(1+|f^k(i)|) time. We place emphasis on the redundancy, or the space beyond the information-theoretic lower bound that the data structure uses in order to support operations efficiently. A number of lower bounds have recently been shown on the redundancy of data structures. These lower bounds confirm the space-time optimality of some of our solutions. Furthermore, the redundancy of one of our structures "surpasses" a recent lower bound by Golynski [Golynski, SODA 2009], thus demonstrating the limitations of this lower bound. | Succinct Representations of Permutations and Functions | 4,741 |
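The reason arbitrary powers \pi^k(i) can be answered in constant time is that a permutation decomposes into cycles, and a power is just a modular step along a cycle. Below is a non-succinct illustrative sketch of that folklore idea (the paper's contribution is doing this in nearly optimal space):

```python
def preprocess_cycles(pi):
    """Decompose pi into cycles; record, for every i, its cycle id and
    its position within that cycle, so pi^k(i) is one modular step."""
    n = len(pi)
    where = [None] * n          # where[i] = (cycle_id, position)
    cycles = []
    seen = [False] * n
    for s in range(n):
        if seen[s]:
            continue
        cyc, j = [], s
        while not seen[j]:
            seen[j] = True
            where[j] = (len(cycles), len(cyc))
            cyc.append(j)
            j = pi[j]
        cycles.append(cyc)
    return cycles, where

def power(cycles, where, i, k):
    """Return pi^k(i) for any integer k; negative k gives inverses."""
    c, pos = where[i]
    cyc = cycles[c]
    return cyc[(pos + k) % len(cyc)]
```

For pi = [1, 2, 0, 4, 3] (cycles (0 1 2) and (3 4)), pi^3(0) = 0 and pi^{-1}(0) = 2.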
We consider the problem of supporting Rank() and Select() operations on a bit vector of length m with n 1 bits. The problem is considered in the succinct index model, where the bit vector is stored in "read-only" memory and an additional data structure, called the index, is created during pre-processing to help answer the above queries. We give asymptotically optimal density-sensitive trade-offs, involving both m and n, that relate the size of the index to the number of accesses to the bit vector (and processing time) needed to answer the above queries. The results are particularly interesting for the case where n = o(m). | Optimal Indexes for Sparse Bit Vectors | 4,742 |
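To make the Rank()/Select() operations concrete, here is a toy index in the same spirit (one absolute rank sample per block plus a scan inside the block); real succinct indexes replace the scan with table lookups and are far more space-efficient, so this sketch only illustrates the query semantics, not the paper's structures:

```python
class RankSelect:
    """Toy rank/select index over a read-only list of bits."""
    def __init__(self, bits, block=4):
        self.bits = bits
        self.block = block
        self.sample = [0]       # sample[j] = # of 1s in bits[0 : j*block]
        ones = 0
        for i, b in enumerate(bits):
            ones += b
            if (i + 1) % block == 0:
                self.sample.append(ones)

    def rank1(self, i):
        """Number of 1s in bits[0:i]."""
        j = i // self.block
        r = self.sample[j]
        for b in self.bits[j * self.block : i]:
            r += b
        return r

    def select1(self, k):
        """Position of the k-th 1 (1-based), via binary search on rank."""
        lo, hi = 0, len(self.bits)
        while lo < hi:
            mid = (lo + hi) // 2
            if self.rank1(mid + 1) < k:
                lo = mid + 1
            else:
                hi = mid
        return lo
```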
We study the problem of optimal traffic prediction and monitoring in large-scale networks. Our goal is to determine which subset of K links to monitor in order to "best" predict the traffic on the remaining links in the network. We consider several optimality criteria. This can be formulated as a combinatorial optimization problem, belonging to the family of subset selection problems. Similar NP-hard problems arise in statistics, machine learning and signal processing; examples include subset selection for regression, variable selection, and sparse approximation. Exact solutions are computationally prohibitive. We present both new heuristics as well as new efficient algorithms implementing the classical greedy heuristic commonly used to tackle such combinatorial problems. Our approach exploits connections to principal component analysis (PCA), and yields new types of performance lower bounds which do not require submodularity of the objective functions. We show that an ensemble method applied to our new randomized heuristic algorithm often outperforms the classical greedy heuristic in practice. We evaluate our algorithms on several large-scale networks, including real-life networks. | Fast Approximation Algorithms for Near-optimal Large-scale Network Monitoring | 4,743
In this paper, we present a polynomial-time dynamic programming algorithm that tests whether an $n$-vertex directed tree $T$ has an upward planar embedding into a convex point set $S$ of size $n$. Further, we extend our approach to the class of outerplanar digraphs. This nontrivial and surprising result implies that any given digraph can be efficiently tested for an upward planar embedding into a given convex point set. | Upward Point Set Embeddability for Convex Point Sets is in $P$ | 4,744
We show that randomization can lead to significant improvements for a few fundamental problems in distributed tracking. Our basis is the {\em count-tracking} problem, where there are $k$ players, each holding a counter $n_i$ that gets incremented over time, and the goal is to track an $\eps$-approximation of their sum $n=\sum_i n_i$ continuously at all times, using minimum communication. While the deterministic communication complexity of the problem is $\Theta(k/\eps \cdot \log N)$, where $N$ is the final value of $n$ when the tracking finishes, we show that with randomization, the communication cost can be reduced to $\Theta(\sqrt{k}/\eps \cdot \log N)$. Our algorithm is simple and uses only O(1) space at each player, while the lower bound holds even assuming each player has infinite computing power. Then, we extend our techniques to two related distributed tracking problems: {\em frequency-tracking} and {\em rank-tracking}, and obtain similar improvements over previous deterministic algorithms. Both problems are of central importance in large data monitoring and analysis, and have been extensively studied in the literature. | Randomized Algorithms for Tracking Distributed Count, Frequencies, and Ranks | 4,745
This paper presents a new network model designed for networks consisting of spatial objects. The model allows the development of more advanced representations of systems of networked objects and the study of geographical phenomena propagated through networks. The capabilities of the model in simulating the propagation of geographical phenomena are also studied, and relevant algorithms are presented. As examples of use, the modeling of a water supply network and the simulation of traffic flow in road networks are presented. | Model for networks of spatial objects and simulation of geographical phenomena propagation | 4,746
We revisit various string indexing problems with range reporting features, namely, position-restricted substring searching, indexing substrings with gaps, and indexing substrings with intervals. We obtain the following main results. (i) We give efficient reductions for each of the above problems to a new problem, which we call \emph{substring range reporting}. Hence, we unify the previous work by showing that we may restrict our attention to a single problem rather than studying each of the above problems individually. (ii) We show how to solve substring range reporting with optimal query time and little space. Combined with our reductions, this leads to significantly improved time-space trade-offs for the above problems. In particular, for each problem we obtain the first solutions with optimal query time and $O(n\log^{O(1)} n)$ space, where $n$ is the length of the indexed string. (iii) We show that our techniques for substring range reporting generalize to \emph{substring range counting} and \emph{substring range emptiness} variants. We also obtain non-trivial time-space trade-offs for these problems. Our bounds for substring range reporting are based on a novel combination of suffix trees and range reporting data structures. The reductions are simple and general and may apply to other combinations of string indexing with range reporting. | Substring Range Reporting | 4,747
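Position-restricted substring searching, the first problem above, can be illustrated with a naive suffix-array sketch: binary search locates the suffix-array interval of the pattern, and a filtering scan applies the position restriction. (The paper's point is precisely to replace this scan with an efficient geometric range reporting query; the naive construction below is an assumption for illustration only.)

```python
def build_suffix_array(s):
    # naive O(n^2 log n) construction -- fine for a sketch
    return sorted(range(len(s)), key=lambda i: s[i:])

def sa_interval(s, sa, p):
    """Binary search for the contiguous suffix-array range whose
    suffixes start with pattern p."""
    m = len(p)
    lo, hi = 0, len(sa)
    while lo < hi:                       # leftmost suffix >= p
        mid = (lo + hi) // 2
        if s[sa[mid]:sa[mid] + m] < p:
            lo = mid + 1
        else:
            hi = mid
    left = lo
    hi = len(sa)
    while lo < hi:                       # leftmost suffix whose prefix > p
        mid = (lo + hi) // 2
        if s[sa[mid]:sa[mid] + m] <= p:
            lo = mid + 1
        else:
            hi = mid
    return left, lo

def position_restricted_search(s, sa, p, a, b):
    """Occurrences of p in s starting at a position in [a, b]."""
    l, r = sa_interval(s, sa, p)
    return sorted(i for i in sa[l:r] if a <= i <= b)
```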
In this paper, we propose the first exact algorithm for minimizing the difference of two submodular functions (D.S.), i.e., the discrete version of the D.C. programming problem. The developed algorithm is a branch-and-bound-based algorithm which responds to the structure of this problem through the relationship between submodularity and convexity. The D.S. programming problem covers a broad range of applications in machine learning because this generalizes the optimization of a wide class of set functions. We empirically investigate the performance of our algorithm, and illustrate the difference between exact and approximate solutions respectively obtained by the proposed and existing algorithms in feature selection and discriminative structure learning. | Prismatic Algorithm for Discrete D.C. Programming Problems | 4,748 |
Previous compact representations of permutations have focused on adding a small index on top of the plain data $<\pi(1), \pi(2),...\pi(n)>$, in order to efficiently support the application of the inverse or the iterated permutation. In this paper we initiate the study of techniques that exploit the compressibility of the data itself, while retaining efficient computation of $\pi(i)$ and its inverse. In particular, we focus on exploiting {\em runs}, which are subsets (contiguous or not) of the domain where the permutation is monotonic. Several variants of those types of runs arise in real applications such as inverted indexes and suffix arrays. Furthermore, our improved results on compressed data structures for permutations also yield better adaptive sorting algorithms. | On Compressing Permutations and Adaptive Sorting | 4,749 |
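The connection between contiguous monotonic runs and adaptive sorting can be made concrete with a small sketch: split the input into its maximal non-decreasing runs, then merge them pairwise, giving roughly O(n log rho) work for rho runs. This is an illustrative baseline, not the compressed data structure of the paper:

```python
from heapq import merge

def runs(a):
    """Split a into its maximal non-decreasing contiguous runs."""
    out, cur = [], [a[0]]
    for x in a[1:]:
        if x >= cur[-1]:
            cur.append(x)
        else:
            out.append(cur)
            cur = [x]
    out.append(cur)
    return out

def adaptive_sort(a):
    """Merge the runs pairwise; nearly sorted inputs (few runs)
    sort in nearly linear time."""
    if not a:
        return []
    rs = runs(a)
    while len(rs) > 1:
        rs = [list(merge(rs[i], rs[i + 1])) if i + 1 < len(rs) else rs[i]
              for i in range(0, len(rs), 2)]
    return rs[0]
```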
We consider the capacitated domination problem, which models a service-requirement assigning scenario and which is also a generalization of the dominating set problem. In this problem, we are given a graph with three parameters defined on the vertex set, which are cost, capacity, and demand. The objective of this problem is to compute a demand assignment of least cost, such that the demand of each vertex is fully-assigned to some of its closed neighbours without exceeding the amount of capacity they provide. In this paper, we provide the first constant factor approximation for this problem on planar graphs, based on a new perspective on the hierarchical structure of outer-planar graphs. We believe that this new perspective and technique can be applied to other capacitated covering problems to help tackle vertices of large degrees. | Capacitated Domination: Constant Factor Approximation for Planar Graphs | 4,750 |
We consider the problem of maximizing a monotone submodular function in a $k$-exchange system. These systems, introduced by Feldman et al., generalize the matroid k-parity problem in a wide class of matroids and capture many other combinatorial optimization problems. Feldman et al. show that a simple non-oblivious local search algorithm attains a $(k + 1)/2$ approximation ratio for the problem of linear maximization in a $k$-exchange system. Here, we extend this approach to the case of monotone submodular objective functions. We give a deterministic, non-oblivious local search algorithm that attains an approximation ratio of $(k + 3)/2$ for the problem of maximizing a monotone submodular function in a $k$-exchange system. | A $(k + 3)/2$-approximation algorithm for monotone submodular maximization over a $k$-exchange system | 4,751
We study the Fault-Tolerant Facility Placement problem (FTFP), which generalizes the uncapacitated facility location problem (UFL). In FTFP, we are given a set F of sites at which facilities can be built, and a set C of clients with demands that need to be satisfied by different facilities. A client $j$ has demand $r_j$. Building one facility at a site $i$ incurs a cost $f_i$, and connecting one unit of demand from client $j$ to a facility at site $i\in F$ costs $d_{ij}$. The distances $d_{ij}$ are assumed to form a metric. A feasible solution specifies the number of facilities to be built at each site and the way to connect demands from clients to facilities, with the restriction that demands from the same client must go to different facilities. Facilities at the same site are considered different. The goal is to find a solution with minimum total cost. We give a 1.7245-approximation algorithm for the FTFP problem. Our technique is via a reduction to the Fault-Tolerant Facility Location problem, in which each client has demand $r_j$ but each site can have at most one facility built. | New Results on the Fault-Tolerant Facility Placement Problem | 4,752
We present a framework for computing with input data specified by intervals, representing uncertainty in the values of the input parameters. To compute a solution, the algorithm can query the input parameters, yielding more refined estimates in the form of sub-intervals, and the objective is to minimize the number of queries. Previous approaches address the scenario where every query returns an exact value. Our framework is more general, as it can deal with a wider variety of inputs and query responses, and we establish interesting relationships between them that have not been investigated previously. Although some approaches from the previous, restricted models can be adapted to the more general model, we require more sophisticated techniques for the analysis, and we also obtain improved algorithms for the previous model. We address selection problems in the generalized model and show that there exist 2-update-competitive algorithms that do not depend on the lengths or distribution of the sub-intervals and that hold against a worst-case adversary. We also obtain similar bounds on the competitive ratio for the MST problem in graphs. | The update complexity of selection and related problems | 4,753
A binary matrix has the consecutive ones property (C1P) if it is possible to order the columns so that all 1s are consecutive in every row. In [McConnell, SODA 2004 768-777] the notion of incompatibility graph of a binary matrix was introduced and it was shown that odd cycles of this graph provide a certificate that a matrix does not have the consecutive ones property. A bound of (k+2) was claimed for the smallest odd cycle of a non-C1P matrix with k columns. In this note we show that this result can be obtained simply and directly via Tucker patterns, and that the correct bound is (k+2) when k is even, but (k+3) when k is odd. | A tight bound on the length of odd cycles in the incompatibility graph of a non-C1P matrix | 4,754
In order to evaluate, compare, and tune graph algorithms, experiments on well designed benchmark sets have to be performed. Together with the goal of reproducibility of experimental results, this creates a demand for a public archive to gather and store graph instances. Such an archive would ideally allow annotation of instances or sets of graphs with additional information like graph properties and references to the respective experiments and results. Here we examine the requirements, and introduce a new community project with the aim of producing an easily accessible library of graphs. Through successful community involvement, it is expected that the archive will contain a representative selection of both real-world and generated graph instances, covering significant application areas as well as interesting classes of graphs. | The Open Graph Archive: A Community-Driven Effort | 4,755 |
We consider a class of pattern matching problems where a normalising transformation is applied at every alignment. Normalised pattern matching plays a key role in fields as diverse as image processing and musical information processing where application specific transformations are often applied to the input. By considering the class of polynomial transformations of the input, we provide fast algorithms and the first lower bounds for both new and old problems. Given a pattern of length m and a longer text of length n where both are assumed to contain integer values only, we first show O(n log m) time algorithms for pattern matching under linear transformations even when wildcard symbols can occur in the input. We then show how to extend the technique to polynomial transformations of arbitrary degree. Next we consider the problem of finding the minimum Hamming distance under polynomial transformation. We show that, for any epsilon>0, there cannot exist an O(n m^(1-epsilon)) time algorithm for additive and linear transformations conditional on the hardness of the classic 3SUM problem. Finally, we consider a version of the Hamming distance problem under additive transformations with a bound k on the maximum distance that need be reported. We give a deterministic O(nk log k) time solution which we then improve by careful use of randomisation to O(n sqrt(k log k) log n) time for sufficiently small k. Our randomised solution outputs the correct answer at every position with high probability. | Pattern Matching under Polynomial Transformation | 4,756 |
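Pattern matching under a linear transformation admits a straightforward O(nm) checker, which makes the problem statement concrete (the paper's contribution is the much faster O(n log m) algorithms and the lower bounds; the function below is only an illustrative baseline). For each alignment it derives the candidate map x -> a*x + b from two pattern positions with distinct values and then verifies it, treating wildcards as matching anything:

```python
def linear_matches(text, pat, wild=None):
    """Report alignments i where some linear map x -> a*x + b sends
    pat onto text[i : i+m], ignoring wildcard positions."""
    n, m, out = len(text), len(pat), []
    for i in range(n - m + 1):
        pairs = [(p, t) for p, t in zip(pat, text[i:i + m])
                 if p is not wild and t is not wild]
        # derive a, b from two pattern positions with distinct values
        a, b = 1, 0
        for j in range(1, len(pairs)):
            if pairs[j][0] != pairs[0][0]:
                p0, t0 = pairs[0]
                p1, t1 = pairs[j]
                a = (t1 - t0) / (p1 - p0)
                b = t0 - a * p0
                break
        else:
            if pairs:            # all pattern values equal: a is free
                a, b = 0, pairs[0][1]
        if all(a * p + b == t for p, t in pairs):
            out.append(i)
    return out
```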
The List Accessing Problem is a well-studied research problem in the context of linear search. The input to the list accessing problem is an unsorted linear list of distinct elements along with a sequence of requests, where each request is an access operation on an element of the list. A list accessing algorithm reorganizes the list while processing a request sequence in order to minimize the access cost. The Move-To-Front (MTF) algorithm has been proved to be the best-performing online list accessing algorithm in the literature to date. Characterizing the input request sequences that correspond to practical, real-life situations is a major challenge for the list accessing problem, and to the best of our knowledge no such characterization has appeared in the literature. In this paper, we characterize the request sequences for the list accessing problem based on several factors, such as the size of the list, the size of the request sequence, the ordering of elements, and the frequency of occurrence of elements in the request sequence. We make a comprehensive study of the MTF list accessing algorithm and obtain new theoretical results for our characterized special classes of request sequences. Our characterization opens up a new direction of research for the empirical analysis of list accessing algorithms on real-life inputs. | Characterization of Request Sequences for List Accessing Problem and New Theoretical Results for MTF Algorithm | 4,757
There are many well-known cost models for the list accessing problem; the standard cost model developed by Sleator and Tarjan is the most widely used. In this paper, we make a comprehensive study of the existing cost models and propose a new cost model for the list accessing problem. In our proposed cost model, the cost of processing a request sequence on a singly linked list is the sum of three components: access cost, matching cost, and replacement cost. We also propose a novel method for processing the request sequence which does not rearrange the list and instead uses the concepts of buffering, matching, look-ahead, and a flag bit. | A New Proposed Cost Model for List Accessing Problem using Buffering | 4,758
Modern graphics processors provide exceptional computational power, but only for certain computational models. While they have revolutionized computation in many fields, compression has been largely unaffected. This paper aims to explain the current issues and possibilities in GPGPU compression. This is done through a high-level overview of the GPGPU computational model in the context of compression algorithms, along with a more in-depth analysis of how one would implement bzip2 on a GPGPU architecture. | Lossless data compression on GPGPU architectures | 4,759
The Integer Programming Problem (IP) for a polytope P \subseteq R^n is to find an integer point in P or decide that P is integer free. We give an algorithm for an approximate version of this problem, which correctly decides whether P contains an integer point or whether a (1+\eps) scaling of P around its barycenter is integer free in time O(1/\eps^2)^n. We reduce this approximate IP question to an approximate Closest Vector Problem (CVP) in a "near-symmetric" semi-norm, which we solve via a sieving technique first developed by Ajtai, Kumar, and Sivakumar (STOC 2001). Our main technical contribution is an extension of the AKS sieving technique which works for any near-symmetric semi-norm. Our results also extend to general convex bodies and lattices. | A O(1/eps^2)^n Time Sieving Algorithm for Approximate Integer Programming | 4,760
Distance oracles are data structures that provide fast (possibly approximate) answers to shortest-path and distance queries in graphs. The tradeoff between the space requirements and the query time of distance oracles is of particular interest and the main focus of this paper. In FOCS'01, Thorup introduced approximate distance oracles for planar graphs. He proved that, for any eps>0 and for any planar graph on n nodes, there exists a (1+eps)-approximate distance oracle using space O(n eps^{-1} log n) such that approximate distance queries can be answered in time O(1/eps). Ten years later, we give the first improvements on the space-query-time tradeoff for planar graphs. * We give the first oracle having a space-time product with subquadratic dependency on 1/eps. For space ~O(n log n) we obtain query time ~O(1/eps) (assuming polynomial edge weights). The space shows a doubly logarithmic dependency on 1/eps only. We believe that the dependency on eps may be almost optimal. * For the case of moderate edge weights (average bounded by polylog(n), which appears to be the case for many real-world road networks), we hit a "sweet spot," improving upon Thorup's oracle both in terms of eps and n. Our oracle uses space ~O(n log log n) and it has query time ~O(log log log n + 1/eps). (Asymptotic notation in this abstract hides low-degree polynomials in log(1/eps) and log*(n).) | More Compact Oracles for Approximate Distances in Planar Graphs | 4,761
We consider the \emph{two-dimensional range maximum query (2D-RMQ)} problem: given an array $A$ of ordered values, to pre-process it so that we can find the position of the smallest element in the sub-matrix defined by a (user-specified) range of rows and range of columns. We focus on determining the \emph{effective} entropy of 2D-RMQ, i.e., how many bits are needed to encode $A$ so that 2D-RMQ queries can be answered \emph{without} access to $A$. We give tight upper and lower bounds on the expected effective entropy for the case when $A$ contains independent identically-distributed random values, and new upper and lower bounds for arbitrary $A$, for the case when $A$ contains few rows. The latter results improve upon previous upper and lower bounds by Brodal et al. (ESA 2010). In some cases we also give data structures whose space usage is close to the effective entropy and answer 2D-RMQ queries rapidly. | Encoding 2-D Range Maximum Queries | 4,762 |
Motivated by the imminent growth of massive, highly redundant genomic databases, we study the problem of compressing a string database while simultaneously supporting fast random access, substring extraction and pattern matching to the underlying string(s). Bille et al. (2011) recently showed how, given a straight-line program with $r$ rules for a string $s$ of length $n$, we can build an $\Oh{r}$-word data structure that allows us to extract any substring of length $m$ in $\Oh{\log n + m}$ time. They also showed how, given a pattern $p$ of length $m$ and an edit distance $k \leq m$, their data structure supports finding all \occ approximate matches to $p$ in $s$ in $\Oh{r (\min (m k, k^4 + m) + \log n) + \occ}$ time. Rytter (2003) and Charikar et al. (2005) showed that $r$ is always at least the number $z$ of phrases in the LZ77 parse of $s$, and gave algorithms for building straight-line programs with $\Oh{z \log n}$ rules. In this paper we give a simple $\Oh{z \log n}$-word data structure that takes the same time for substring extraction but only $\Oh{z \min (m k, k^4 + m) + \occ}$ time for approximate pattern matching. | Faster Approximate Pattern Matching in Compressed Repetitive Texts | 4,763
Consider an undirected weighted graph G=(V,E) with |V|=n and |E|=m, where each vertex v is assigned a label from a set L of \ell labels. We show how to construct a compact distance oracle that can answer queries of the form: "what is the distance from v to the closest lambda-labeled node" for a given node v in V and label lambda in L. This problem was introduced by Hermelin, Levy, Weimann and Yuster [ICALP 2011] where they present several results for this problem. In the first result, they show how to construct a vertex-label distance oracle of expected size O(kn^{1+1/k}) with stretch (4k - 5) and query time O(k). In a second result, they show how to reduce the size of the data structure to O(kn \ell^{1/k}) at the expense of a huge stretch, the stretch of this construction grows exponentially in k, (2^k-1). In the third result they present a dynamic vertex-label distance oracle that is capable of handling label changes in a sub-linear time. The stretch of this construction is also exponential in k, (2 3^{k-1}+1). We manage to significantly improve the stretch of their constructions, reducing the dependence on k from exponential to polynomial (4k-5), without requiring any tradeoff regarding any of the other variables. In addition, we introduce the notion of vertex-label spanners: subgraphs that preserve distances between every node v and label lambda. We present an efficient construction for vertex-label spanners with stretch-size tradeoff close to optimal. | Improved Distance Oracles and Spanners for Vertex-Labeled Graphs | 4,764 |
We present a packing-based approximation algorithm for the $k$-Set Cover problem. We introduce a new local search-based $k$-set packing heuristic, and call it Restricted $k$-Set Packing. We analyze its tight approximation ratio via a complicated combinatorial argument. Equipped with the Restricted $k$-Set Packing algorithm, our $k$-Set Cover algorithm is composed of the $k$-Set Packing heuristic \cite{schrijver} for $k\geq 7$, Restricted $k$-Set Packing for $k=6,5,4$ and the semi-local $(2,1)$-improvement \cite{furer} for 3-Set Cover. We show that our algorithm obtains a tight approximation ratio of $H_k-0.6402+\Theta(\frac{1}{k})$, where $H_k$ is the $k$-th harmonic number. For small $k$, our results are 1.8667 for $k=6$, 1.7333 for $k=5$ and 1.5208 for $k=4$. Our algorithm improves the currently best approximation ratio for the $k$-Set Cover problem of any $k\geq 4$. | Packing-Based Approximation Algorithm for the k-Set Cover Problem | 4,765 |
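The $H_k$ benchmark that the row above improves on comes from the textbook greedy algorithm for set cover, which attains the classical $H_k$ ratio when every set has at most $k$ elements. The sketch below is that greedy baseline (the function name and interface are illustrative; this is not the paper's Restricted $k$-Set Packing routine):

```python
def greedy_set_cover(universe, sets):
    """Textbook greedy set cover: repeatedly pick the set covering the
    most uncovered elements.  When every set has size at most k, this
    achieves the classical H_k approximation ratio -- the baseline that
    packing-based algorithms improve upon."""
    uncovered = set(universe)
    chosen = []
    while uncovered:
        # select the set with the largest number of still-uncovered elements
        best = max(sets, key=lambda s: len(s & uncovered))
        if not best & uncovered:
            raise ValueError("instance has no cover")
        chosen.append(best)
        uncovered -= best
    return chosen
```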
We consider the Generalized Bin Covering (GBC) problem: We are given $m$ bin types, where each bin of type $i$ has profit $p_i$ and demand $d_i$. Furthermore, there are $n$ items, where item $j$ has size $s_j$. A bin of type $i$ is covered if the set of items assigned to it has total size at least the demand $d_i$. In that case, the profit of $p_i$ is earned and the objective is to maximize the total profit. To the best of our knowledge, only the cases $p_i = d_i = 1$ (Bin Covering) and $p_i = d_i$ (Variable-Sized Bin Covering (VSBC)) have been treated before. We study two models of bin supply: In the unit supply model, we have exactly one bin of each type, i.e., we have individual bins. By contrast, in the infinite supply model, we have arbitrarily many bins of each type. Clearly, the unit supply model is a generalization of the infinite supply model. To the best of our knowledge the unit supply model has not been studied yet. Our results for the unit supply model hold not only asymptotically, but for all instances. This contrasts most of the previous work on Bin Covering. We prove that there is a combinatorial 5-approximation algorithm for GBC with unit supply, which has running time $O(nm\sqrt{m+n})$. Furthermore, for VSBC we show that the natural and fast Next Fit Decreasing (NFD) algorithm is a 9/4-approximation in the unit supply model. The bound is tight for the algorithm and close to being best-possible. We show that there is an AFPTAS for VSBC in the \emph{infinite} supply model. | Approximation Algorithms for Variable-Sized and Generalized Bin Covering | 4,766
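As a rough illustration of the Next Fit Decreasing heuristic mentioned above, here is a minimal sketch for the VSBC case ($p_i = d_i$) with unit supply. The exact item/bin ordering and tie-breaking rules analyzed in the paper may differ; this only conveys the shape of the heuristic:

```python
def next_fit_decreasing(demands, items):
    """Minimal Next-Fit-Decreasing-style sketch for variable-sized bin
    covering with unit supply: items are assigned in decreasing size
    order to one bin at a time; once the current bin's demand is met,
    its profit (= demand, since p_i = d_i in VSBC) is earned and we
    move to the next bin.  Illustrative only, not the paper's exact rule."""
    items = sorted(items, reverse=True)
    demands = sorted(demands, reverse=True)
    profit, i = 0, 0
    for d in demands:
        filled = 0
        while filled < d and i < len(items):
            filled += items[i]
            i += 1
        if filled >= d:
            profit += d  # bin covered: earn p_i = d_i
        else:
            break        # out of items; remaining bins stay uncovered
    return profit
```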
To store and search genomic databases efficiently, researchers have recently started building compressed self-indexes based on grammars. In this paper we show how, given a straight-line program with $r$ rules for a string $S[1..n]$ whose LZ77 parse consists of $z$ phrases, we can store a self-index for $S$ in $O(r + z \log \log n)$ space such that, given a pattern $P[1..m]$, we can list the $occ$ occurrences of $P$ in $S$ in $O(m^2 + occ \log \log n)$ time. If the straight-line program is balanced and we accept a small probability of building a faulty index, then we can reduce the $O(m^2)$ term to $O(m \log m)$. All previous self-indexes are larger or slower in the worst case. | A Faster Grammar-Based Self-Index | 4,767
We consider a natural generalization of the classical pattern matching problem: given compressed representations of a pattern p[1..M] and a text t[1..N] of sizes m and n, respectively, does p occur in t? We develop an optimal linear time solution for the case when both p and t are compressed using the LZW method. This improves the previously known O((n+m)log(n+m)) time solution of Gasieniec and Rytter, and essentially closes the line of research devoted to studying LZW-compressed exact pattern matching. | Tying up the loose ends in fully LZW-compressed pattern matching | 4,768 |
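For context, plain LZW compression (the representation the matcher above operates on directly, without decompressing) can be sketched as:

```python
def lzw_compress(s):
    """Standard LZW: maintain a dictionary of seen phrases, emit the
    code of the longest dictionary phrase matching the current input,
    and add that phrase extended by one symbol to the dictionary."""
    dictionary = {chr(i): i for i in range(256)}  # seed with single bytes
    w, out = "", []
    for ch in s:
        if w + ch in dictionary:
            w += ch                       # extend the current phrase
        else:
            out.append(dictionary[w])     # emit code for longest match
            dictionary[w + ch] = len(dictionary)
            w = ch
    if w:
        out.append(dictionary[w])
    return out
```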
We revisit the range minimum query problem and present a new O(n)-space data structure that supports queries in O(1) time. Although previous data structures exist whose asymptotic bounds match ours, our goal is to introduce a new solution that is simple, intuitive, and practical without increasing costs for query time or space. | A Simple Linear-Space Data Structure for Constant-Time Range Minimum Query | 4,769
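A standard point of comparison for the row above is the textbook sparse-table RMQ, which matches the O(1) query time but uses O(n log n) space rather than the O(n) the paper achieves. A minimal sketch:

```python
def build_sparse_table(a):
    """Classic sparse-table RMQ: table[j][i] holds the index of the
    minimum of a[i .. i + 2^j - 1].  O(n log n) space, O(1) queries
    (the paper's structure improves the space to O(n))."""
    n = len(a)
    table = [list(range(n))]  # level 0: each element is its own minimum
    j = 1
    while (1 << j) <= n:
        prev = table[j - 1]
        row = []
        for i in range(n - (1 << j) + 1):
            left, right = prev[i], prev[i + (1 << (j - 1))]
            row.append(left if a[left] <= a[right] else right)
        table.append(row)
        j += 1
    return table

def rmq(a, table, l, r):
    """Index of the minimum of a[l..r] (inclusive) via two overlapping
    power-of-two blocks, answered in O(1)."""
    j = (r - l + 1).bit_length() - 1
    x, y = table[j][l], table[j][r - (1 << j) + 1]
    return x if a[x] <= a[y] else y
```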
We study the problem of parameterized matching in a stream where we want to output matches between a pattern of length m and the last m symbols of the stream before the next symbol arrives. Parameterized matching is a natural generalisation of exact matching where an arbitrary one-to-one relabelling of pattern symbols is allowed. We show how this problem can be solved in constant time per arriving stream symbol and sublinear, near optimal space with high probability. Our results are surprising and important: it has been shown that almost no streaming pattern matching problems can be solved (not even randomised) in less than Theta(m) space, with exact matching as the only known problem to have a sublinear, near optimal space solution. Here we demonstrate that a similar sublinear, near optimal space solution is achievable for an even more challenging problem. The proof is considerably more complex than that for exact matching. | Parameterized Matching in the Streaming Model | 4,770 |
In Online Sum-Radii Clustering, n demand points arrive online and must be irrevocably assigned to a cluster upon arrival. The cost of each cluster is the sum of a fixed opening cost and its radius, and the objective is to minimize the total cost of the clusters opened by the algorithm. We show that the deterministic competitive ratio of Online Sum-Radii Clustering for general metric spaces is \Theta(\log n), where the upper bound follows from a primal-dual algorithm and holds for general metric spaces, and the lower bound is valid for ternary Hierarchically Well-Separated Trees (HSTs) and for the Euclidean plane. Combined with the results of (Csirik et al., MFCS 2010), this result demonstrates that the deterministic competitive ratio of Online Sum-Radii Clustering changes abruptly, from constant to logarithmic, when we move from the line to the plane. We also show that Online Sum-Radii Clustering in metric spaces induced by HSTs is closely related to the Parking Permit problem introduced by (Meyerson, FOCS 2005). Exploiting the relation to Parking Permit, we obtain a lower bound of \Omega(\log\log n) on the randomized competitive ratio of Online Sum-Radii Clustering in tree metrics. Moreover, we present a simple randomized O(\log n)-competitive algorithm, and a deterministic O(\log\log n)-competitive algorithm for the fractional version of the problem. | Online Sum-Radii Clustering | 4,771 |
We show how to compute the edit distance between two strings of length n up to a factor of $2^{\tilde{O}(\sqrt{\log n})}$ in $n^{1+o(1)}$ time. This is the first sub-polynomial approximation algorithm for this problem that runs in near-linear time, improving on the state-of-the-art $n^{1/3+o(1)}$ approximation. Previously, approximation of $2^{\tilde{O}(\sqrt{\log n})}$ was known only for embedding edit distance into $\ell_1$, and it is not known if that embedding can be computed in less than quadratic time. | Approximating Edit Distance in Near-Linear Time | 4,772
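For reference, the exact quadratic-time dynamic program that the approximation above is designed to beat:

```python
def edit_distance(s, t):
    """Exact edit distance (Levenshtein) via the standard O(mn)-time
    dynamic program, kept to two rows of O(n) space."""
    m, n = len(s), len(t)
    prev = list(range(n + 1))  # distances from s[:0] to every prefix of t
    for i in range(1, m + 1):
        cur = [i] + [0] * n
        for j in range(1, n + 1):
            cur[j] = min(prev[j] + 1,                          # deletion
                         cur[j - 1] + 1,                       # insertion
                         prev[j - 1] + (s[i - 1] != t[j - 1])) # substitution
        prev = cur
    return prev[n]
```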
We reinterpret some online greedy algorithms for a class of nonlinear "load-balancing" problems as solving a mathematical program online. For example, we consider the problem of assigning jobs to (unrelated) machines to minimize the sum of the alpha^{th}-powers of the loads plus assignment costs (the online Generalized Assignment Problem); or choosing paths to connect terminal pairs to minimize the alpha^{th}-powers of the edge loads (online routing with speed-scalable routers). We give analyses of these online algorithms using the dual of the primal program as a lower bound for the optimal algorithm, much in the spirit of online primal-dual results for linear problems. We then observe that a wide class of uni-processor speed scaling problems (with essentially arbitrary scheduling objectives) can be viewed as such load balancing problems with linear assignment costs. This connection gives new algorithms for problems that had resisted solutions using the dominant potential function approaches used in the speed scaling literature, as well as alternate, cleaner proofs for other known results. | Online Primal-Dual For Non-linear Optimization with Applications to Speed Scaling | 4,773
Recently Rubinfeld et al. (ICS 2011, pp. 223--238) proposed a new model of sublinear algorithms called \emph{local computation algorithms}. In this model, a computation problem $F$ may have more than one legal solution and each of them consists of many bits. The local computation algorithm for $F$ should answer in an online fashion, for any index $i$, the $i^{\mathrm{th}}$ bit of some legal solution of $F$. Further, all the answers given by the algorithm should be consistent with at least one solution of $F$. In this work, we continue the study of local computation algorithms. In particular, we develop a technique which under certain conditions can be applied to construct local computation algorithms that run not only in polylogarithmic time but also in polylogarithmic \emph{space}. Moreover, these local computation algorithms are easily parallelizable and can answer all parallel queries consistently. Our main technical tools are pseudorandom numbers with bounded independence and the theory of branching processes. | Space-efficient Local Computation Algorithms | 4,774 |
The eviction problem for memory hierarchies is studied for the Hidden Markov Reference Model (HMRM) of the memory trace, showing how miss minimization can be naturally formulated in the optimal control setting. In addition to the traditional version assuming a buffer of fixed capacity, a relaxed version is also considered, in which buffer occupancy can vary and its average is constrained. Resorting to multiobjective optimization, viewing occupancy as a cost rather than as a constraint, the optimal eviction policy is obtained by composing solutions for the individual addressable items. This approach is then specialized to the Least Recently Used Stack Model (LRUSM), a type of HMRM often considered for traces, which includes V-1 parameters, where V is the size of the virtual space. A gain optimal policy for any target average occupancy is obtained which (i) is computable in time O(V) from the model parameters, (ii) is optimal also for the fixed capacity case, and (iii) is characterized in terms of priorities, under the name Least Profit Rate (LPR) policy. An O(log C) upper bound (where C is the buffer capacity) is derived for the ratio between the expected miss rate of LPR and that of OPT, the optimal off-line policy; the upper bound is tightened to O(1), under reasonable constraints on the LRUSM parameters. Using the stack-distance framework, an algorithm is developed to compute the number of misses incurred by LPR on a given input trace, simultaneously for all buffer capacities, in time O(log V) per access. Finally, some results are provided for miss minimization over a finite horizon and over an infinite horizon under bias optimality, a criterion more stringent than gain optimality. | Optimal Eviction Policies for Stochastic Address Traces | 4,775
A 2.75-approximation algorithm is proposed for the unconstrained traveling tournament problem, which is a variant of the traveling tournament problem. For the unconstrained traveling tournament problem, this is the first proposal of an approximation algorithm with a constant approximation ratio. In addition, the proposed algorithm yields a solution that meets both the no-repeater and mirrored constraints. Computational experiments show that the algorithm generates solutions of good quality. | A 2.75-Approximation Algorithm for the Unconstrained Traveling Tournament Problem | 4,776
In the query-commit problem we are given a graph where edges have distinct probabilities of existing. It is possible to query the edges of the graph, and if the queried edge exists then its endpoints are irrevocably matched. The goal is to find a querying strategy which maximizes the expected size of the matching obtained. This stochastic matching setup is motivated by applications in kidney exchanges and online dating. In this paper we address the query-commit problem from both theoretical and experimental perspectives. First, we show that a simple class of edges can be queried without compromising the optimality of the strategy. This property is then used to obtain in polynomial time an optimal querying strategy when the input graph is sparse. Next we turn our attention to the kidney exchange application, focusing on instances modeled over real data from existing exchange programs. We prove that, as the number of nodes grows, almost every instance admits a strategy which matches almost all nodes. This result supports the intuition that more exchanges are possible on a larger pool of patient/donor pairs and gives theoretical justification for unifying the existing exchange programs. Finally, we evaluate experimentally different querying strategies over kidney exchange instances. We show that even very simple heuristics perform fairly well, being within 1.5% of an optimal clairvoyant strategy that knows in advance the edges in the graph. In such a time-sensitive application, this result motivates the use of committing strategies. | The Query-commit Problem | 4,777
This work is concerned with approximating constraint satisfaction problems (CSPs) with an additional global cardinality constraint. For example, Max-Cut is a boolean CSP where the input is a graph $G = (V,E)$ and the goal is to find a cut $S \cup \bar S = V$ that maximizes the number of crossing edges, $|E(S,\bar S)|$. The Max-Bisection problem is a variant of Max-Cut with an additional global constraint that each side of the cut has exactly half the vertices, i.e., $|S| = |V|/2$. Several other natural optimization problems like Min-Bisection and approximating Graph Expansion can be formulated as CSPs with global constraints. In this work, we formulate a general approach towards approximating CSPs with global constraints using SDP hierarchies. To demonstrate the approach we present the following results: Using the Lasserre hierarchy, we present an algorithm that runs in time $O(n^{poly(1/\epsilon)})$ that, given an instance of Max-Bisection with value $1-\epsilon$, finds a bisection with value $1-O(\sqrt{\epsilon})$. This approximation is near-optimal (up to constant factors in the $O(\cdot)$) under the Unique Games Conjecture. By a computer-assisted proof, we show that the same algorithm also achieves a 0.85-approximation for Max-Bisection, improving on the previous bound of 0.70 (note that it is Unique-Games hard to approximate better than a 0.878 factor). The same algorithm also yields a 0.92-approximation for Max-2-Sat with cardinality constraints. For every CSP with a global cardinality constraint, we present a generic conversion from integrality gap instances for the Lasserre hierarchy to a {\it dictatorship test} whose soundness is at most the integrality gap. Dictatorship testing gadgets are central to hardness results for CSPs, and a generic conversion of the above nature lies at the core of the tight Unique Games based hardness result for CSPs \cite{Raghavendra08}. | Approximating CSPs with Global Cardinality Constraints Using SDP Hierarchies | 4,778
We give a nearly optimal sublinear-time algorithm for approximating the size of a minimum vertex cover in a graph G. The algorithm may query the degree deg(v) of any vertex v of its choice, and for each 1 <= i <= deg(v), it may ask for the i-th neighbor of v. Letting VC_opt(G) denote the minimum size of vertex cover in G, the algorithm outputs, with high constant success probability, an estimate VC_estimate(G) such that VC_opt(G) <= VC_estimate(G) <= 2 * VC_opt(G) + epsilon*n, where epsilon is a given additive approximation parameter. We refer to such an estimate as a (2,epsilon)-estimate. The query complexity and running time of the algorithm are ~O(avg_deg * poly(1/epsilon)), where avg_deg denotes the average vertex degree in the graph. The best previously known sublinear algorithm, of Yoshida et al. (STOC 2009), has query complexity and running time O(d^4/epsilon^2), where d is the maximum degree in the graph. Given the lower bound of Omega(avg_deg) (for constant epsilon) for obtaining such an estimate (with any constant multiplicative factor) due to Parnas and Ron (TCS 2007), our result is nearly optimal. In the case that the graph is dense, that is, the number of edges is Theta(n^2), we consider another model, in which the algorithm may ask, for any pair of vertices u and v, whether there is an edge between u and v. We show how to adapt the algorithm that uses neighbor queries to this model and obtain an algorithm that outputs a (2,epsilon)-estimate of the size of a minimum vertex cover whose query complexity and running time are ~O(n) * poly(1/epsilon). | A Near-Optimal Sublinear-Time Algorithm for Approximating the Minimum Vertex Cover Size | 4,779
We consider an online preemptive scheduling problem where jobs with deadlines arrive sporadically. A commitment requirement is imposed such that the scheduler has to either accept or decline a job immediately upon arrival. The scheduler's decision to accept an arriving job constitutes a contract with the customer; if the accepted job is not completed by its deadline as promised, the scheduler loses the value of the corresponding job and has to pay an additional penalty depending on the amount of unfinished workload. The objective of the online scheduler is to maximize the overall profit, i.e., the total value of the admitted jobs completed before their deadlines less the penalty paid for the admitted jobs that miss their deadlines. We show that the maximum competitive ratio is $3-2\sqrt{2}$ and propose a simple online algorithm to achieve this competitive ratio. The optimal scheduling includes a threshold admission and a greedy scheduling policies. The proposed algorithm has direct applications to the charging of plug-in hybrid electrical vehicles (PHEV) at garages or parking lots. | Optimal Deadline Scheduling with Commitment | 4,780 |
In a software watermarking environment, several graph theoretic watermark methods use numbers as watermark values, where some of these methods encode the watermark numbers as graph structures. In this paper we extend the class of error correcting graphs by proposing an efficient and easily implemented codec system for encoding watermark numbers as reducible permutation flow-graphs. More precisely, we first present an efficient algorithm which encodes a watermark number $w$ as a self-inverting permutation $\pi^*$ and, then, an algorithm which encodes the self-inverting permutation $\pi^*$ as a reducible permutation flow-graph $F[\pi^*]$ by exploiting domination relations on the elements of $\pi^*$ and using an efficient DAG representation of $\pi^*$. The whole encoding process takes O(n) time and space, where $n$ is the binary size of the number $w$ or, equivalently, the number of elements of the permutation $\pi^*$. We also propose efficient decoding algorithms which extract the number $w$ from the reducible permutation flow-graph $F[\pi^*]$ within the same time and space complexity. The two main components of our proposed codec system, i.e., the self-inverting permutation $\pi^*$ and the reducible permutation graph $F[\pi^*]$, incorporate important structural properties which make our system resilient to attacks. | Efficient Encoding of Watermark Numbers as Reducible Permutation Graphs | 4,781
We give the first polylogarithmic-competitive randomized online algorithm for the $k$-server problem on an arbitrary finite metric space. In particular, our algorithm achieves a competitive ratio of O(log^3 n log^2 k log log n) for any metric space on n points. Our algorithm improves upon the deterministic (2k-1)-competitive algorithm of Koutsoupias and Papadimitriou [J.ACM'95] whenever n is sub-exponential in k. | A Polylogarithmic-Competitive Algorithm for the k-Server Problem | 4,782 |
We show that there exist linear-time algorithms that compute the strong chromatic index and a maximum induced matching of tree-cographs when the decomposition tree is a part of the input. We also show that there exists an efficient algorithm for the strong chromatic index of permutation graphs. | On the strong chromatic index and maximum induced matching of tree-cographs and permutation graphs | 4,783
We study the Minimum Latency Submodular Cover problem (MLSC), which consists of a metric $(V,d)$ with source $r\in V$ and $m$ monotone submodular functions $f_1, f_2, ..., f_m: 2^V \rightarrow [0,1]$. The goal is to find a path originating at $r$ that minimizes the total cover time of all functions. This generalizes well-studied problems, such as Submodular Ranking [AzarG11] and Group Steiner Tree [GKR00]. We give a polynomial time $O(\log \frac{1}{\epsilon} \cdot \log^{2+\delta} |V|)$-approximation algorithm for MLSC, where $\epsilon>0$ is the smallest non-zero marginal increase of any $\{f_i\}_{i=1}^m$ and $\delta>0$ is any constant. We also consider the Latency Covering Steiner Tree problem (LCST), which is the special case of MLSC where the $f_i$'s are multi-coverage functions. This is a common generalization of the Latency Group Steiner Tree [GuptaNR10a,ChakrabartyS11] and Generalized Min-sum Set Cover [AzarGY09, BansalGK10] problems. We obtain an $O(\log^2|V|)$-approximation algorithm for LCST. Finally, we study a natural stochastic extension of the Submodular Ranking problem, and obtain an adaptive algorithm with an $O(\log 1/\epsilon)$ approximation ratio, which is best possible. This result also generalizes some previously studied stochastic optimization problems, such as Stochastic Set Cover [GoemansV06] and Shared Filter Evaluation [MunagalaSW07, LiuPRY08]. | Minimum Latency Submodular Cover | 4,784
We consider string matching with variable length gaps. Given a string $T$ and a pattern $P$ consisting of strings separated by variable length gaps (arbitrary strings of length in a specified range), the problem is to find all ending positions of substrings in $T$ that match $P$. This problem is a basic primitive in computational biology applications. Let $m$ and $n$ be the lengths of $P$ and $T$, respectively, and let $k$ be the number of strings in $P$. We present a new algorithm achieving time $O(n\log k + m +\alpha)$ and space $O(m + A)$, where $A$ is the sum of the lower bounds of the lengths of the gaps in $P$ and $\alpha$ is the total number of occurrences of the strings in $P$ within $T$. Compared to the previous results this bound essentially achieves the best known time and space complexities simultaneously. Consequently, our algorithm obtains the best known bounds for almost all combinations of $m$, $n$, $k$, $A$, and $\alpha$. Our algorithm is surprisingly simple and straightforward to implement. We also present algorithms for finding and encoding the positions of all strings in $P$ for every match of the pattern. | String Matching with Variable Length Gaps | 4,785 |
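The problem definition above can be illustrated with a naive regex-based reference implementation. It is far slower than the paper's algorithm and, because the quantifier is greedy, it reports one match per starting position rather than every ending position; the interface is illustrative only:

```python
import re

def gap_matches(text, parts, gaps):
    """Naive sketch of string matching with variable length gaps:
    parts is the list of pattern strings, gaps[i] = (lo, hi) bounds the
    gap length between parts[i] and parts[i+1].  Uses a zero-width
    lookahead to allow overlapping starting positions, and returns the
    ending position of the (greedy) match found at each start."""
    pattern = re.escape(parts[0])
    for (lo, hi), part in zip(gaps, parts[1:]):
        pattern += ".{%d,%d}" % (lo, hi) + re.escape(part)
    ends = []
    for m in re.finditer("(?=(%s))" % pattern, text):
        ends.append(m.start() + len(m.group(1)) - 1)
    return ends
```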
We consider the problem of distinguishing between two arbitrary black-box distributions defined over the domain [n], given access to $s$ samples from both. It is known that in the worst case $O(n^{2/3})$ samples are both necessary and sufficient, provided that the distributions have $L_1$ difference at least $\epsilon$. However, it is also known that in many cases fewer samples suffice. We identify a new parameter that provides an upper bound on how many samples are needed, and present an efficient algorithm that requires a number of samples independent of the domain size. Also, for a large subclass of distributions we provide a lower bound that matches our upper bound up to a poly-logarithmic factor. | Telling Two Distributions Apart: a Tight Characterization | 4,786
Consider an input text string T[1,N] drawn from an unbounded alphabet. We study partial computation in suffix-based problems for Data Compression and Text Indexing such as (I) retrieve any segment of K<=N consecutive symbols from the Burrows-Wheeler transform of T, and (II) retrieve any chunk of K<=N consecutive entries of the Suffix Array or the Suffix Tree. Prior literature would take O(N log N) comparisons (and time) to solve these problems by solving the total problem of building the entire Burrows-Wheeler transform or Text Index for T, and performing a post-processing to single out the wanted portion. We introduce a novel adaptive approach to the partial computational problems above, and solve both partial problems in O(K log K + N) comparisons and time, improving the best known running times of O(N log N) for K=o(N). These partial-computation problems are intimately related since they share a common bottleneck: the suffix multi-selection problem, which is to output the suffixes of rank r_1,r_2,...,r_K under the lexicographic order, where r_1<r_2<...<r_K, r_i in [1,N]. Special cases of this problem are well known: K=N is the suffix sorting problem that is the workhorse in Stringology with hundreds of applications, and K=1 is the recently studied suffix selection. We show that suffix multi-selection can be solved in Theta(N log N - sum_{j=0}^K Delta_j log Delta_j + N) time and comparisons, where r_0=0, r_{K+1}=N+1, and Delta_j=r_{j+1}-r_j for 0<=j<=K. This is asymptotically optimal, and also matches the bound in [Dobkin, Munro, JACM 28(3)] for multi-selection on atomic elements (not suffixes). Matching the bound known for atomic elements has been a long-running theme and challenge since the 70's, which we achieve for the suffix multi-selection problem. The partial suffix problems as well as the suffix multi-selection problem have many applications. | Partial Data Compression and Text Indexing via Optimal Suffix Multi-Selection | 4,787
The goal of (stable) sparse recovery is to recover a $k$-sparse approximation $x^*$ of a vector $x$ from linear measurements of $x$. Specifically, the goal is to recover $x^*$ such that $||x-x^*||_p \le C \min_{k\text{-sparse } x'} ||x-x'||_q$ for some constant $C$ and norm parameters $p$ and $q$. It is known that, for $p=q=1$ or $p=q=2$, this task can be accomplished using $m=O(k \log (n/k))$ non-adaptive measurements [CRT06] and that this bound is tight [DIPW10,FPRU10,PW11]. In this paper we show that if one is allowed to perform measurements that are adaptive, then the number of measurements can be considerably reduced. Specifically, for $C=1+\epsilon$ and $p=q=2$ we show: - A scheme with $m=O((1/\epsilon)k \log \log (n \epsilon/k))$ measurements that uses $O(\log^* k \cdot \log \log (n \epsilon/k))$ rounds. This is a significant improvement over the best possible non-adaptive bound. - A scheme with $m=O((1/\epsilon) k \log (k/\epsilon) + k \log (n/k))$ measurements that uses two rounds. This improves over the best possible non-adaptive bound. To the best of our knowledge, these are the first results of this type. As an independent application, we show how to solve the problem of finding a duplicate in a data stream of $n$ items drawn from $\{1, 2, ..., n-1\}$ using $O(\log n)$ bits of space and $O(\log \log n)$ passes, improving over the best possible space complexity achievable using a single pass. | On the Power of Adaptivity in Sparse Recovery | 4,788
We consider network design problems for information networks where routers can replicate data but cannot alter it. This functionality allows the network to eliminate data-redundancy in traffic, thereby saving on routing costs. We consider two problems within this framework and design approximation algorithms. The first problem we study is the traffic-redundancy aware network design (RAND) problem. We are given a weighted graph over a single server and many clients. The server owns a number of different data packets and each client desires a subset of the packets; the client demand sets form a laminar set system. Our goal is to connect every client to the source via a single path, such that the collective cost of the resulting network is minimized. Here the transportation cost over an edge is its weight times the number of distinct packets that it carries. The second problem is a facility location problem that we call RAFL. Here the goal is to find an assignment from clients to facilities such that the total cost of routing packets from the facilities to clients (along unshared paths), plus the total cost of "producing" one copy of each desired packet at each facility is minimized. We present a constant factor approximation for the RAFL and an O(log P) approximation for RAND, where P is the total number of distinct packets. We remark that P is always at most the number of different demand sets desired or the number of clients, and is generally much smaller. | Traffic-Redundancy Aware Network Design | 4,789
We study graph partitioning problems from a min-max perspective, in which an input graph on n vertices should be partitioned into k parts, and the objective is to minimize the maximum number of edges leaving a single part. The two main versions we consider are where the k parts need to be of equal-size, and where they must separate a set of k given terminals. We consider a common generalization of these two problems, and design for it an $O(\sqrt{\log n\log k})$-approximation algorithm. This improves over an $O(\log^2 n)$ approximation for the second version, and roughly $O(k\log n)$ approximation for the first version that follows from other previous work. We also give an improved O(1)-approximation algorithm for graphs that exclude any fixed minor. Our algorithm uses a new procedure for solving the Small-Set Expansion problem. In this problem, we are given a graph G and the goal is to find a non-empty set $S\subseteq V$ of size $|S| \leq \rho n$ with minimum edge-expansion. We give an $O(\sqrt{\log{n}\log{(1/\rho)}})$ bicriteria approximation algorithm for the general case of Small-Set Expansion, and O(1) approximation algorithm for graphs that exclude any fixed minor. | Min-Max Graph Partitioning and Small Set Expansion | 4,790 |
We present a new algorithm for computing $m$-th roots over the finite field $\mathbb{F}_q$, where $q = p^n$, with $p$ a prime, and $m$ any positive integer. In the particular case $m=2$, the cost of the new algorithm is an expected $O(\mathsf{M}(n)\log (p) + \mathsf{C}(n)\log(n))$ operations in $\mathbb{F}_p$, where $\mathsf{M}(n)$ and $\mathsf{C}(n)$ are bounds for the cost of polynomial multiplication and modular polynomial composition. Known results give $\mathsf{M}(n) = O(n\log (n) \log\log (n))$ and $\mathsf{C}(n) = O(n^{1.67})$, so our algorithm is subquadratic in $n$. | Taking Roots over High Extensions of Finite Fields | 4,791
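The classical base case of the row above ($m=2$, $n=1$, i.e., square roots modulo a prime) is handled by the well-known Tonelli-Shanks algorithm; a sketch for context (the paper's contribution is root-taking over high extensions $\mathbb{F}_{p^n}$ and general $m$, which this does not cover):

```python
def tonelli_shanks(a, p):
    """Square root of a modulo an odd prime p, or None if a is a
    non-residue.  Standard Tonelli-Shanks: trivial when p = 3 (mod 4),
    otherwise iteratively correct r using a quadratic non-residue."""
    if a % p == 0:
        return 0
    if pow(a, (p - 1) // 2, p) != 1:
        return None                      # Euler criterion: non-residue
    if p % 4 == 3:
        return pow(a, (p + 1) // 4, p)   # easy case
    q, s = p - 1, 0                      # write p - 1 = q * 2^s, q odd
    while q % 2 == 0:
        q //= 2
        s += 1
    z = 2                                # find any quadratic non-residue
    while pow(z, (p - 1) // 2, p) != p - 1:
        z += 1
    m, c, t, r = s, pow(z, q, p), pow(a, q, p), pow(a, (q + 1) // 2, p)
    while t != 1:
        i, t2 = 0, t                     # least i with t^(2^i) = 1
        while t2 != 1:
            t2 = t2 * t2 % p
            i += 1
        b = pow(c, 1 << (m - i - 1), p)
        m, c, t, r = i, b * b % p, t * b * b % p, r * b % p
    return r
```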
We introduce the first grammar-compressed representation of a sequence that supports searches in time that depends only logarithmically on the size of the grammar. Given a text $T[1..u]$ that is represented by a (context-free) grammar of $n$ (terminal and nonterminal) symbols and size $N$ (measured as the sum of the lengths of the right-hand sides of the rules), a basic grammar-based representation of $T$ takes $N\lg n$ bits of space. Our representation requires $2N\lg n + N\lg u + \epsilon\, n\lg n + o(N\lg n)$ bits of space, for any $0<\epsilon \le 1$. It can find the positions of the $occ$ occurrences of a pattern of length $m$ in $T$ in $O((m^2/\epsilon)\lg (\frac{\lg u}{\lg n}) +occ\lg n)$ time, and extract any substring of length $\ell$ of $T$ in time $O(\ell+h\lg(N/h))$, where $h$ is the height of the grammar tree. | Improved Grammar-Based Compressed Indexes | 4,792
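A toy illustration of the extraction primitive such indexes support, on a tiny straight-line grammar (a hypothetical example grammar of my own; the paper's data structure is far more compact and clever):

```python
from functools import lru_cache

# Toy straight-line grammar: S -> A A b, A -> a b, so expand(S) = T = "ababb".
# Symbols not in GRAMMAR are terminals (single characters).
GRAMMAR = {"S": ["A", "A", "b"], "A": ["a", "b"]}

@lru_cache(maxsize=None)
def length(sym):
    """Length of the expansion of sym."""
    if sym not in GRAMMAR:
        return 1
    return sum(length(s) for s in GRAMMAR[sym])

def extract(sym, i, j):
    """Return T[i:j] of the expansion of sym without expanding everything:
    precomputed expansion lengths let us recurse only into the children
    that overlap the interval [i, j)."""
    if sym not in GRAMMAR:
        return sym[i:j]
    out, pos = [], 0
    for s in GRAMMAR[sym]:
        l = length(s)
        if pos < j and pos + l > i:
            out.append(extract(s, max(i - pos, 0), min(j - pos, l)))
        pos += l
    return "".join(out)
```

This recursion visits O(h) nodes per extracted character in the worst case, where h is the grammar-tree height, matching the flavor of the $O(\ell+h\lg(N/h))$ extraction bound quoted in the abstract.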
We present a deterministic (1+sqrt(5))/2-approximation algorithm for the s-t path TSP for an arbitrary metric. Given a symmetric metric cost on n vertices including two prespecified endpoints, the problem is to find a shortest Hamiltonian path between the two endpoints; Hoogeveen showed that the natural variant of Christofides' algorithm is a 5/3-approximation algorithm for this problem, and this asymptotically tight bound in fact has been the best approximation ratio known until now. We modify this algorithm so that it chooses the initial spanning tree based on an optimal solution to the Held-Karp relaxation rather than a minimum spanning tree; we prove this simple but crucial modification leads to an improved approximation ratio, surpassing the 20-year-old barrier set by the natural Christofides' algorithm variant. Our algorithm also proves an upper bound of (1+sqrt(5))/2 on the integrality gap of the path-variant Held-Karp relaxation. The techniques devised in this paper can be applied to other optimization problems as well: these applications include improved approximation algorithms and improved LP integrality gap upper bounds for the prize-collecting s-t path problem and the unit-weight graphical metric s-t path TSP. | Improving Christofides' Algorithm for the s-t Path TSP | 4,793 |
A number of recent results on optimization problems involving submodular functions have made use of the multilinear relaxation of the problem. These results hold typically in the value oracle model, where the objective function is accessible via a black box returning f(S) for a given S. We present a general approach to deriving inapproximability results in the value oracle model, based on the notion of symmetry gap. Our main result is that for any fixed instance that exhibits a certain symmetry gap in its multilinear relaxation, there is a naturally related class of instances for which a better approximation factor than the symmetry gap would require exponentially many oracle queries. This unifies several known hardness results for submodular maximization, and implies several new ones. In particular, we prove that there is no constant-factor approximation for the problem of maximizing a non-negative submodular function over the bases of a matroid. We also provide a closely matching approximation algorithm for this problem. | Symmetry and approximability of submodular maximization problems | 4,794 |
We consider the problem of indexing a string $t$ of length $n$ to report the occurrences of a query pattern $p$ containing $m$ characters and $j$ wildcards. Let $occ$ be the number of occurrences of $p$ in $t$, and $\sigma$ the size of the alphabet. We obtain the following results. - A linear space index with query time $O(m+\sigma^j \log \log n + occ)$. This significantly improves the previously best known linear space index by Lam et al. [ISAAC 2007], which requires query time $\Theta(jn)$ in the worst case. - An index with query time $O(m+j+occ)$ using space $O(\sigma^{k^2} n \log^k \log n)$, where $k$ is the maximum number of wildcards allowed in the pattern. This is the first non-trivial bound with this query time. - A time-space trade-off, generalizing the index by Cole et al. [STOC 2004]. We also show that these indexes can be generalized to allow variable length gaps in the pattern. Our results are obtained using a novel combination of well-known and new techniques, which could be of independent interest. | String Indexing for Patterns with Wildcards | 4,795 |
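The query bounds above are best appreciated against the naive unindexed scan, sketched here (my own baseline illustration, not one of the paper's data structures), which pays O(n m) per query:

```python
WILDCARD = "?"  # hypothetical choice of wildcard symbol

def wildcard_occurrences(t, p):
    """Report all starting positions where pattern p occurs in text t,
    with WILDCARD matching any single character. No index is built, so
    every query costs O(n * m) time."""
    n, m = len(t), len(p)
    return [i for i in range(n - m + 1)
            if all(pc == WILDCARD or pc == tc
                   for pc, tc in zip(p, t[i:i + m]))]
```

An index trades preprocessing space for query time; the abstract's first result brings the query cost down to $O(m+\sigma^j \log \log n + occ)$ in linear space, exponential only in the number of wildcards j rather than linear in the text length n.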
The generalized 2-server problem is an online optimization problem where a sequence of requests has to be served at minimal cost. Requests arrive one by one and need to be served instantly by at least one of two servers. We consider the general model where the cost function of the two servers may be different. Formally, each server moves in its own metric space and a request consists of one point in each metric space. It is served by moving one of the two servers to its request point. Requests have to be served without knowledge of the future requests. The objective is to minimize the total traveled distance. The special case where both servers move on the real line is known as the CNN-problem. We show that the generalized work function algorithm is constant competitive for the generalized 2-server problem. | The generalized work function algorithm is competitive for the
generalized 2-server problem | 4,796 |
Given $n$ length-$\ell$ strings $S =\{s_1, ..., s_n\}$ over a constant size alphabet $\Sigma$ together with parameters $d$ and $k$, the objective in the {\em Consensus String with Outliers} problem is to find a subset $S^*$ of $S$ of size $n-k$ and a string $s$ such that $\sum_{s_i \in S^*} d(s_i, s) \leq d$. Here $d(x, y)$ denotes the Hamming distance between the two strings $x$ and $y$. We prove the following results. 1. (i) The variant of {\em Consensus String with Outliers} where the number of outliers $k$ is fixed and the objective is to minimize the total distance $\sum_{s_i \in S^*} d(s_i, s)$ admits a simple PTAS. (ii) Under the natural assumption that the number of outliers $k$ is small, the PTAS for the distance minimization version of {\em Consensus String with Outliers} performs well. In particular, as long as $k\leq cn$ for a fixed constant $c < 1$, the algorithm provides a $(1+\epsilon)$-approximate solution in time $f(1/\epsilon)(n\ell)^{O(1)}$ and thus is an EPTAS. 2. In order to improve the PTAS for {\em Consensus String with Outliers} to an EPTAS, the assumption that $k$ is small is necessary. Specifically, when $k$ is allowed to be arbitrary, the {\em Consensus String with Outliers} problem does not admit an EPTAS unless FPT=W[1]. This hardness result holds even for binary alphabets. 3. The decision version of {\em Consensus String with Outliers} is fixed parameter tractable when parameterized by $\frac{d}{n-k}$, and thus also when parameterized by just $d$. To the best of our knowledge, {\em Consensus String with Outliers} is the first problem that admits a PTAS, and is fixed parameter tractable when parameterized by the value of the objective function but does not admit an EPTAS under plausible complexity assumptions. | Outlier Detection for DNA Fragment Assembly | 4,797
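To make the objective concrete, note that once the outlier set is fixed, the distance-minimizing consensus string is simply the columnwise majority vote; for small k one can therefore brute-force over outlier subsets. This is an illustrative sketch with hypothetical helper names, not the paper's PTAS:

```python
from itertools import combinations
from collections import Counter

def hamming(x, y):
    """Hamming distance d(x, y) between equal-length strings."""
    return sum(a != b for a, b in zip(x, y))

def consensus(strings):
    """Columnwise majority vote minimizes the total Hamming distance
    to a fixed set of equal-length strings."""
    return "".join(Counter(col).most_common(1)[0][0] for col in zip(*strings))

def consensus_with_outliers(S, k):
    """Brute force over all (n choose k) outlier sets -- exponential in k,
    so feasible only for tiny k; the paper's algorithms avoid this blow-up.
    Returns (total distance, consensus string) for the best inlier set."""
    best = None
    for keep in combinations(S, len(S) - k):
        s = consensus(keep)
        cost = sum(hamming(si, s) for si in keep)
        if best is None or cost < best[0]:
            best = (cost, s)
    return best
```

The hardness results in the abstract explain why this exponential dependence on k cannot in general be improved to an EPTAS when k is unrestricted.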
Computing accurate low rank approximations of large matrices is a fundamental data mining task. In many applications, however, the matrix contains sensitive information about individuals. In such cases we would like to release a low rank approximation that satisfies a strong privacy guarantee such as differential privacy. Unfortunately, to date the best known algorithm for this task that satisfies differential privacy is based on naive input perturbation or randomized response: each entry of the matrix is perturbed independently by a sufficiently large random noise variable, and a low rank approximation is then computed on the resulting matrix. We give (the first) significant improvements in accuracy over randomized response under the natural and necessary assumption that the matrix has low coherence. Our algorithm is also very efficient and finds a constant rank approximation of an m x n matrix in time O(mn). Note that even generating the noise matrix required for randomized response already requires time O(mn). | Beating Randomized Response on Incoherent Matrices | 4,798
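The randomized-response baseline described in the abstract can be sketched as follows. This is an illustration only: the noise calibration below assumes unit per-entry sensitivity, and the exact scale depends on the neighboring-matrix notion used; the subsequent low rank step (e.g. a truncated SVD) is omitted:

```python
import random

def input_perturbation(A, epsilon, seed=0):
    """Input-perturbation / randomized-response baseline: add independent
    Laplace noise to every entry of the matrix A (a list of rows)."""
    rng = random.Random(seed)
    b = 1.0 / epsilon  # Laplace scale parameter (assumes unit sensitivity)
    def laplace():
        # The difference of two i.i.d. Exp(1/b) variables is Laplace(0, b).
        return rng.expovariate(1.0 / b) - rng.expovariate(1.0 / b)
    return [[a + laplace() for a in row] for row in A]
```

Generating this noise matrix alone costs O(mn) time, which is the point of the abstract's final remark: the paper's algorithm is no slower than the baseline while being more accurate on low-coherence matrices.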
Allocation of balls into bins is a well-studied abstraction for load balancing problems. The literature hosts numerous results for the sequential (single-dimensional) allocation case when m balls are thrown into n bins. In this paper we study the symmetric multiple-choice process for both unweighted and weighted balls, as well as for both multidimensional and scalar models. Additionally, we present bounds on the gap for the (1+beta) choice process with multidimensional balls and bins. We show that for the symmetric d-choice process with m=O(n), the upper bound on the gap is O(lnln(n)) w.h.p. This upper bound on the gap is within a D/f factor of the lower bound. This is the first such tight result. For the general case of m>>n, the expected gap is bounded by O(lnln(n)). For variable f and a non-uniform distribution of the populated dimensions, we obtain an upper bound on the expected gap of O(log(n)). Further, for the multiple-round parallel balls and bins, we show that the gap is also bounded by O(loglog(n)) for m=O(n). The same bound holds for the expected gap when m>>n. Our analysis also has strong implications in the sequential scalar case. For weighted balls and bins and the general case m>>n, we show that the upper bound on the expected gap is O(log(n)), which improves upon the best prior bound of n^c. Moreover, we show that for the (1+beta) choice process and m=O(n), the upper bound (assuming a uniform distribution of the f populated dimensions over D total dimensions) on the gap is O(log(n)/beta), which is within a D/f factor of the lower bound. For fixed f with a non-uniform distribution and for random f with a Binomial distribution, the expected gap remains O(log(n)/beta), independent of the total number of balls thrown. This is the first such tight result for the (1+beta) paradigm with multidimensional balls and bins. | Multidimensional Balanced Allocation for Multiple Choice & (1 + Beta) Processes | 4,799
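The scalar symmetric d-choice process studied above is easy to simulate; this minimal sketch (an illustration of the model, not the paper's analysis) measures the gap, i.e. the maximum load minus the average load m/n:

```python
import random

def d_choice_gap(m, n, d, seed=0):
    """Simulate the symmetric d-choice process: throw m balls into n bins,
    each ball probing d uniformly random bins and joining the least loaded
    one. Returns the gap: max load minus the average load m/n."""
    rng = random.Random(seed)
    load = [0] * n
    for _ in range(m):
        best = min((rng.randrange(n) for _ in range(d)),
                   key=lambda i: load[i])
        load[best] += 1
    return max(load) - m / n
```

Comparing d = 1 with d = 2 in such a simulation exhibits the "power of two choices" effect the abstract quantifies: with two or more choices the gap stays roughly O(ln ln n) even as m grows well beyond n.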