| text | source | __index_level_0__ |
|---|---|---|
In a recent paper, we introduced the simultaneous representation problem (defined for any graph class C) and studied the problem for chordal, comparability and permutation graphs. For interval graphs, the problem is defined as follows. Two interval graphs G_1 and G_2, sharing some vertices I (and the corresponding induced edges), are said to be `simultaneous interval graphs' if there exist interval representations R_1 and R_2 of G_1 and G_2, such that any vertex of I is mapped to the same interval in both R_1 and R_2. Equivalently, G_1 and G_2 are simultaneous interval graphs if there exist edges E' between G_1-I and G_2-I such that G_1 \cup G_2 \cup E' is an interval graph. Simultaneous representation problems are related to simultaneous planar embeddings, and have applications in any situation where it is desirable to consistently represent two related graphs, for example: interval graphs capturing overlaps of DNA fragments of two similar organisms; or graphs connected in time, where one is an updated version of the other. In this paper we give an O(n^2 log n) time algorithm for recognizing simultaneous interval graphs, where n = |G_1 \cup G_2|. This result complements the polynomial time algorithms for recognizing probe interval graphs and provides an efficient algorithm for the interval graph sandwich problem for the special case where the set of optional edges induces a complete bipartite graph. | Simultaneous Interval Graphs | 4,500 |
Clustering under most popular objective functions is NP-hard, even to approximate well, and so unlikely to be efficiently solvable in the worst case. Recently, Bilu and Linial \cite{Bilu09} suggested an approach aimed at bypassing this computational barrier by using properties of instances one might hope to hold in practice. In particular, they argue that instances in practice should be stable to small perturbations in the metric space and give an efficient algorithm for clustering instances of the Max-Cut problem that are stable to perturbations of size $O(n^{1/2})$. In addition, they conjecture that instances stable to as little as O(1) perturbations should be solvable in polynomial time. In this paper we prove that this conjecture is true for any center-based clustering objective (such as $k$-median, $k$-means, and $k$-center). Specifically, we show we can efficiently find the optimal clustering assuming only stability to factor-3 perturbations of the underlying metric in spaces without Steiner points, and stability to factor $2+\sqrt{3}$ perturbations for general metrics. In particular, we show for such instances that the popular Single-Linkage algorithm combined with dynamic programming will find the optimal clustering. We also present NP-hardness results under a weaker but related condition. | Center-based Clustering under Perturbation Stability | 4,501 |
In this research endeavor, we detail sequence alignment algorithms for finding or comparing one-dimensional (1-D), two-dimensional (2-D) and three-dimensional (3-D) sequences in or against a parent (mother) database that is itself a 1-D, 2-D or 3-D sequence. Inner Product [1], [2] based schemes are used to lay down these algorithms. Also, in this research, we detail a sequence alignment algorithm for finding or comparing an N-dimensional (N-D) sequence in or against a parent (mother) database that is itself an N-D sequence, again using an Inner Product [1], [2] based scheme. | One, Two, Three and N Dimensional String Search Algorithms | 4,502 |
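To make the inner-product scheme concrete, here is a minimal sketch of 1-D exact matching via sliding inner products (my own illustration of the general idea, not the algorithm from the row above): per-symbol 0/1 indicator vectors are correlated against the text, and a shift is a match exactly when the summed inner products reach the pattern length (assumes `len(text) >= len(pattern)`).

```python
import numpy as np

def inner_product_match(text: str, pattern: str):
    """Report all positions where pattern occurs in text, via inner
    products: for each symbol, correlate 0/1 indicator vectors; a shift
    matches iff the summed inner products equal len(pattern)."""
    n, m = len(text), len(pattern)
    scores = np.zeros(n - m + 1)
    for ch in set(pattern):
        t = np.fromiter((c == ch for c in text), dtype=float, count=n)
        p = np.fromiter((c == ch for c in pattern), dtype=float, count=m)
        scores += np.correlate(t, p, mode="valid")  # sliding inner products
    return [i for i, s in enumerate(scores) if s == m]

print(inner_product_match("abracadabra", "abra"))  # [0, 7]
```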
Algorithms to generate various combinatorial structures find tremendous importance in computer science. In this paper, we begin by reviewing an algorithm proposed by Rohl that generates all unique permutations of a list of elements which possibly contains repetitions, taking some or all of the elements at a time, in any imposed order. The algorithm uses an auxiliary array that maintains the number of occurrences of each unique element in the input list. We provide a proof of correctness of the algorithm. We then show how one can efficiently generate other combinatorial structures like combinations, subsets, n-Parenthesizations, derangements and integer partitions & compositions with minor changes to the same algorithm. | A Versatile Algorithm to Generate Various Combinatorial Structures | 4,503 |
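A minimal sketch of the count-array idea behind such a generator (illustrative, in the spirit of Rohl's algorithm rather than the paper's exact formulation): maintain the number of remaining occurrences of each unique element and backtrack.

```python
def unique_permutations(elements):
    """Generate all unique permutations of a list that may contain
    repetitions, using an auxiliary array of occurrence counts."""
    uniq = sorted(set(elements))
    counts = {x: elements.count(x) for x in uniq}  # occurrences per element
    perm = []

    def backtrack():
        if len(perm) == len(elements):
            yield tuple(perm)
            return
        for x in uniq:                    # imposed (sorted) order
            if counts[x] > 0:
                counts[x] -= 1
                perm.append(x)
                yield from backtrack()
                perm.pop()
                counts[x] += 1

    yield from backtrack()

print(list(unique_permutations([1, 1, 2])))
# [(1, 1, 2), (1, 2, 1), (2, 1, 1)]
```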
We consider the offline sorting buffer problem. The input is a sequence of items of different types. All items must be processed one by one by a server. The server is equipped with a random-access buffer of limited capacity which can be used to rearrange items. The problem is to design a scheduling strategy that decides upon the order in which items from the buffer are sent to the server. Each type change incurs unit cost, and thus, the cost minimizing objective is to minimize the total number of type changes for serving the entire sequence. This problem is motivated by various applications in manufacturing processes and computer science, and it has attracted significant attention in the last few years. The main focus has been on online competitive algorithms. Surprisingly little is known about the basic offline problem. In this paper, we show that the sorting buffer problem with uniform cost is NP-hard and, thus, close one of the most fundamental questions for the offline problem. On the positive side, we give an O(1)-approximation algorithm when the scheduler is given a buffer only slightly larger than double the original size. We also give a dynamic programming algorithm for the special case of buffer size two that solves the problem exactly in linear time, improving on the standard DP, which runs in cubic time. | The Sorting Buffer Problem is NP-hard | 4,504 |
Two planar graphs G1 and G2 sharing some vertices and edges are `simultaneously planar' if they have planar drawings such that a shared vertex [edge] is represented by the same point [curve] in both drawings. It is an open problem whether simultaneous planarity can be tested efficiently. We give a linear-time algorithm to test simultaneous planarity when the two graphs share a 2-connected subgraph. Our algorithm extends to the case of k planar graphs where each vertex [edge] is either common to all graphs or belongs to exactly one of them. | Testing Simultaneous Planarity when the Common Graph is 2-Connected | 4,505 |
Scheduling with assignment restrictions is an important special case of scheduling unrelated machines which has attracted much attention in the recent past. While a lower bound on approximability of 3/2 is known for its most general setting, subclasses of the problem admit polynomial-time approximation schemes. This note provides a PTAS for tree-like hierarchical structures, improving on a recent 4/3-approximation by Huo and Leung. | A PTAS for Scheduling with Tree Assignment Restrictions | 4,506 |
A critical variable of a satisfiable CNF formula is a variable that has the same value in all satisfying assignments. Using a simple case distinction on the fraction of critical variables of a CNF formula, we improve the running time for 3-SAT from O(1.32216^n) by Rolf [2006] to O(1.32153^n). Using a different approach, Iwama et al. [2010] very recently achieved a running time of O(1.32113^n). Our method nicely combines with theirs, yielding the currently fastest known algorithm with running time O(1.32065^n). We also improve the bound for 4-SAT from O(1.47390^n) [Iwama, Tamaki 2004] to O(1.46928^n), where O(1.46981^n) can be obtained using the methods of [Iwama, Tamaki 2004] and [Rolf 2006]. | Improving PPSZ for 3-SAT using Critical Variables | 4,507 |
We propose and develop an efficient implementation of the robust tabu search heuristic for sparse quadratic assignment problems. The traditional implementation of the heuristic applicable to all quadratic assignment problems is of O(N^2) complexity per iteration for problems of size N. Using multiple priority queues to determine the next best move instead of scanning all possible moves, and using adjacency lists to minimize the operations needed to determine the cost of moves, we reduce the asymptotic complexity per iteration to O(N log N). For practical sized problems, the complexity is O(N). | An Efficient Implementation of the Robust Tabu Search Heuristic for Sparse Quadratic Assignment Problems | 4,508 |
Kernelization algorithms, usually a preprocessing step before other more traditional algorithms, are very special in the sense that they return (reduced) instances, instead of final results. This characteristic excludes the freedom of applying a kernelization algorithm for the weighted version of a problem to its unweighted instances. Thus, with only very few special cases, kernelization algorithms have to be studied separately for the weighted and unweighted versions of a single problem. {\sc feedback arc set on tournament} is currently a very popular problem in recent research on parameterized, as well as approximation, computation, and its wide applications in many areas make it appear in all top conferences. The theory of graph modular decompositions is a general approach to the study of graph structures, whose surface was only touched in previous work on kernelization algorithms for {\sc feedback arc set on tournament}. In this paper, we study further properties of graph modular decompositions and apply them to obtain the first linear kernel for the unweighted {\sc feedback arc set on tournament} problem, which previously admitted a linear kernel only in its weighted version and a quadratic kernel in the unweighted one. | FAST: Kernelization based on Graph Modular Decomposition | 4,509 |
Given a point set S and an unknown metric d on S, we study the problem of efficiently partitioning S into k clusters while querying few distances between the points. In our model we assume that we have access to one-versus-all queries that, given a point s in S, return the distances between s and all other points. We show that given a natural assumption about the structure of the instance, we can efficiently find an accurate clustering using only O(k) distance queries. Our algorithm uses an active selection strategy to choose a small set of points that we call landmarks, and considers only the distances between landmarks and other points to produce a clustering. We use our algorithm to cluster proteins by sequence similarity. This setting nicely fits our model because we can use a fast sequence database search program to query a sequence against an entire dataset. We conduct an empirical study that shows that even though we query a small fraction of the distances between the points, we produce clusterings that are close to a desired clustering given by manual classification. | Efficient Clustering with Limited Distance Information | 4,510 |
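A hedged sketch of the landmark idea described above (an illustration, not necessarily the paper's exact selection rule): choose landmarks farthest-first, spending one one-versus-all query per landmark, and group each point with its nearest landmark. Here `query` is an assumed callable standing in for the one-versus-all primitive.

```python
import random

def landmark_clustering(points, k, query):
    """Sketch: pick k landmarks farthest-first with one one-versus-all
    distance query each, then group points by nearest landmark.
    query(s) is assumed to return {p: d(s, p)} for every point p."""
    landmarks = [random.choice(points)]
    nearest = dict(query(landmarks[0]))   # distance to closest landmark
    label = {p: 0 for p in points}        # index of that landmark
    for i in range(1, k):
        far = max(points, key=nearest.get)   # farthest point so far
        landmarks.append(far)
        for p, d in query(far).items():
            if d < nearest[p]:
                nearest[p], label[p] = d, i
    return landmarks, [[p for p in points if label[p] == j] for j in range(k)]

# usage sketch: given a distance matrix D over point ids,
# query = lambda s: {p: D[s][p] for p in points}
```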
Recent cognitive experiments have shown that the negative impact of an edge crossing on the human understanding of a graph drawing tends to be eliminated in the case where the crossing angles are greater than 70 degrees. This motivated the study of RAC drawings, in which every pair of crossing edges intersects at a right angle. In this work, we demonstrate a class of graphs with a unique RAC combinatorial embedding and we employ members of this class in order to show that it is NP-hard to decide whether a graph admits a straight-line RAC drawing. | The Straight-Line RAC Drawing Problem is NP-Hard | 4,511 |
A priority queue is presented that supports the operations insert and find-min in worst-case constant time, and delete and delete-min on element x in worst-case O(lg(min{w_x, q_x}+2)) time, where w_x (respectively q_x) is the number of elements inserted after x (respectively before x) and still present at the time of the deletion of x. Our priority queue thus has both the working-set and the queueish properties; more strongly, it satisfies these properties in the worst-case sense. We also define a new distribution-sensitive property---the time-finger property---which encapsulates and generalizes both the working-set and queueish properties, and present a priority queue that satisfies this property. In addition, we prove that the working-set property is equivalent to the unified bound (which is the minimum per operation among the static finger, static optimality, and working-set bounds). This latter result is of tremendous interest by itself, as it had gone unnoticed since the introduction of such bounds by Sleator and Tarjan [JACM 1985]. Accordingly, our priority queue also satisfies other distribution-sensitive properties such as the static finger, static optimality, and the unified bound. | Priority Queues with Multiple Time Fingers | 4,512 |
Relative worst order analysis is a supplement or alternative to competitive analysis which has been shown to give results more in accordance with observed behavior of online algorithms for a range of different online problems. The contribution of this paper is twofold. First, it adds the static list accessing problem to the collection of online problems where relative worst order analysis gives better results. Second, and maybe more interesting, it adds the non-trivial supplementary proof technique of list factoring to the theoretical toolbox for relative worst order analysis. | List Factoring and Relative Worst Order Analysis | 4,513 |
We propose a method to exponentially speed up computation of various fingerprints, such as the ones used to compute similarity and rarity in massive data sets. Rather than maintaining the full stream of $b$ items of a universe $[u]$, such methods only maintain a concise fingerprint of the stream, and perform computations using the fingerprints. The computations are done approximately, and the required fingerprint size $k$ depends on the desired accuracy $\epsilon$ and confidence $\delta$. Our technique maintains a single bit per hash function, rather than a single integer, thus requiring a fingerprint of length $k = O(\frac{\ln \frac{1}{\delta}}{\epsilon^2})$ bits, rather than the $O(\log u \cdot \frac{\ln \frac{1}{\delta}}{\epsilon^2})$ bits required by previous approaches. The main advantage of the fingerprints we propose is that rather than computing the fingerprint of a stream of $b$ items in time $O(b \cdot k)$, we can compute it in time $O(b \log k)$. This allows an exponential speedup for the fingerprint construction, or alternatively allows achieving a much higher accuracy while preserving computation time. Our methods rely on a specific family of pseudo-random hashes for which we can quickly locate hashes resulting in small values. | Fast Pseudo-Random Fingerprints | 4,514 |
LRM-Trees are an elegant way to partition a sequence of values into sorted consecutive blocks, and to express the relative position of the first element of each block within a previous block. They were used to encode ordinal trees and to index integer arrays in order to support range minimum queries on them. We describe how they yield many other convenient results in a variety of areas, from data structures to algorithms: some compressed succinct indices for range minimum queries; a new adaptive sorting algorithm; and a compressed succinct data structure for permutations supporting direct and inverse application, all the faster when the permutation is more compressible. | LRM-Trees: Compressed Indices, Adaptive Sorting, and Compressed Permutations | 4,515 |
We study the L1 minimization problem with additional box constraints. We motivate the problem with two different views of optimality. We look into imposing such constraints in projected gradient techniques and propose a worst-case linear-time algorithm to perform such projections. We demonstrate the merits and effectiveness of our algorithms on synthetic as well as real experiments. | L1 Projections with Box Constraints | 4,516 |
For almost two decades the question of whether tabu search (TS) or simulated annealing (SA) performs better for the quadratic assignment problem has been unresolved. To answer this question satisfactorily, we compare performance at various values of targeted solution quality, running each heuristic at its optimal number of iterations for each target. We find that for a number of varied problem instances, SA performs better for higher quality targets while TS performs better for lower quality targets. | Comparative Performance of Tabu Search and Simulated Annealing Heuristics for the Quadratic Assignment Problem | 4,517 |
In the oblivious buy-at-bulk network design problem in a graph, the task is to compute a fixed set of paths for every source-destination pair in the graph, such that any set of demands can be routed along these paths. The demands could be aggregated at intermediate edges, where the fusion cost is specified by a canonical (non-negative concave) function $f$. We give a novel algorithm for planar graphs which is oblivious with respect to the demands, and is also oblivious with respect to the fusion function $f$. The algorithm is deterministic and computes the fixed set of paths in polynomial time, and guarantees a $O(\log n)$ approximation ratio for any set of demands and any canonical fusion function $f$, where $n$ is the number of nodes. The algorithm is asymptotically optimal, since it is known that this problem cannot be approximated with better than $\Omega(\log n)$ ratio. To our knowledge, this is the first tight analysis for planar graphs, and improves the approximation ratio by a factor of $\log n$ with respect to previously known results. | Oblivious Buy-at-Bulk in Planar Graphs | 4,518 |
This paper introduces a special family of randomized algorithms for Max DICUT that we call oblivious algorithms. Let the bias of a vertex be the ratio between the total weight of its outgoing edges and the total weight of all its edges. An oblivious algorithm selects at random in which side of the cut to place a vertex v, with probability that only depends on the bias of v, independently of other vertices. The reader may observe that the algorithm that ignores the bias and chooses each side with probability 1/2 has an approximation ratio of 1/4, whereas no oblivious algorithm can have an approximation ratio better than 1/2 (with an even directed cycle serving as a negative example). We attempt to characterize the best approximation ratio achievable by oblivious algorithms, and present results that are nearly tight. The paper also discusses natural extensions of the notion of oblivious algorithms, and extensions to the more general problem of Max 2-AND. | Oblivious Algorithms for the Maximum Directed Cut Problem | 4,519 |
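The oblivious scheme is easy to state in code. Here is a sketch; the function `side_prob` is the design parameter such an algorithm would tune, and the constant choice in the demo is the bias-ignoring 1/4-approximate baseline mentioned above.

```python
import random

def oblivious_dicut(edges, side_prob):
    """Sketch of an oblivious Max-DICUT algorithm: vertex v joins side S
    independently with probability side_prob(bias(v)), where bias(v) is
    (outgoing weight of v) / (total weight at v). An edge (u, v) of
    weight w contributes w to the cut when u is in S and v is not."""
    out_w, tot_w = {}, {}
    for u, v, w in edges:
        out_w[u] = out_w.get(u, 0.0) + w
        out_w.setdefault(v, 0.0)
        tot_w[u] = tot_w.get(u, 0.0) + w
        tot_w[v] = tot_w.get(v, 0.0) + w
    in_S = {v: random.random() < side_prob(out_w[v] / tot_w[v])
            for v in tot_w}
    return sum(w for u, v, w in edges if in_S[u] and not in_S[v])

# bias-ignoring baseline: each side with probability 1/2
print(oblivious_dicut([("a", "b", 1.0), ("b", "a", 1.0)], lambda bias: 0.5))
```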
Very recently, a new algorithm for the nonnegative single-source shortest path problem on road networks has been discovered. It is very cache-efficient, but only on static road networks. We show how to augment it to the time-dependent scenario. The advantage of the new approach is that it settles nodes, even for a profile query, by scanning all downward edges. We improve the scanning of the downward edges with techniques developed for time-dependent many-to-many computations. | Engineering Time-dependent One-To-All Computation | 4,520 |
We give an O(log^2 n)-approximation algorithm based on the cut-matching framework of [10, 13, 14] for computing the sparsest cut on directed graphs. Our algorithm uses only O(log^2 n) single-commodity max-flow computations and thus breaks the multicommodity-flow barrier for computing the sparsest cut on directed graphs. | Cut-Matching Games on Directed Graphs | 4,521 |
In this paper, we use a new method to decrease the parameterized complexity bound for finding the minimum vertex cover of connected max-degree-3 undirected graphs. The key operation of this method is the reduction of the size of a particular subset of edges, introduced in this paper and called the "real-cycle" subset. Using "real-cycle" reductions alone, we obtain a complexity bound of $O(1.15855^k)$, where $k$ is the size of the optimal vertex cover. Combined with other techniques, the complexity bound can be further improved to $O(1.1504^k)$. This is currently the best complexity bound. | Improved Complexity Bound of Vertex Cover for Low degree Graph | 4,522 |
In two-stage robust optimization the solution to a problem is built in two stages: In the first stage a partial, not necessarily feasible, solution is exhibited. Then the adversary chooses the "worst" scenario from a predefined set of scenarios. In the second stage, the first-stage solution is extended to become feasible for the chosen scenario. The costs at the second stage are larger than at the first one, and the objective is to minimize the total cost paid in the two stages. We give a 2-approximation algorithm for the robust mincut problem and a ({\gamma}+2)-approximation for the robust shortest path problem, where {\gamma} is the approximation ratio for the Steiner tree. This improves the factors (1+\sqrt2) and 2({\gamma}+2) from [Golovin, Goyal and Ravi. Pay today for a rainy day: Improved approximation algorithms for demand-robust min-cut and shortest path problems. STACS 2006]. In addition, our solution for robust shortest path is simpler and more efficient than the earlier ones; this is achieved by a more direct algorithm and analysis, not using some of the standard demand-robust optimization techniques. | Improved approximations for robust mincut and shortest path | 4,523 |
Given an undirected graph $G$, a collection $\{(s_1,t_1),..., (s_k,t_k)\}$ of pairs of vertices, and an integer $p$, the Edge Multicut problem asks if there is a set $S$ of at most $p$ edges such that the removal of $S$ disconnects every $s_i$ from the corresponding $t_i$. Vertex Multicut is the analogous problem where $S$ is a set of at most $p$ vertices. Our main result is that both problems can be solved in time $2^{O(p^3)} \cdot n^{O(1)}$, i.e., they are fixed-parameter tractable parameterized by the size $p$ of the cutset in the solution. By contrast, it is unlikely that an algorithm with running time of the form $f(p) \cdot n^{O(1)}$ exists for the directed version of the problem, as we show it to be W[1]-hard parameterized by the size of the cutset. | Fixed-parameter tractability of multicut parameterized by the size of the cutset | 4,524 |
We show that the set of realizations of a given dimension of a max-plus linear sequence is a finite union of polyhedral sets, which can be computed from any realization of the sequence. This yields an (expensive) algorithm to solve the max-plus minimal realization problem. These results are derived from general facts on rational expressions over idempotent commutative semirings: we show more generally that the set of values of the coefficients of a commutative rational expression in one letter that yield a given max-plus linear sequence is a semi-algebraic set in the max-plus sense. In particular, it is a finite union of polyhedral sets. | The set of realizations of a max-plus linear sequence is semi-polyhedral | 4,525 |
In this paper, we consider the following graph partitioning problem: The input is an undirected graph $G=(V,E),$ a balance parameter $b \in (0,1/2]$ and a target conductance value $\gamma \in (0,1).$ The output is a cut which, if non-empty, is of conductance at most $O(f),$ for some function $f(G, \gamma),$ and which is either balanced or well correlated with all cuts of conductance at most $\gamma.$ Spielman and Teng gave an $\tilde{O}(|E|/\gamma^{2})$-time algorithm for $f= \sqrt{\gamma \log^{3}|V|}$ and used it to decompose graphs into a collection of near-expanders. We present a new spectral algorithm for this problem which runs in time $\tilde{O}(|E|/\gamma)$ for $f=\sqrt{\gamma}.$ Our result yields the first nearly-linear time algorithm for the classic Balanced Separator problem that achieves the asymptotically optimal approximation guarantee for spectral methods. Our method has the advantage of being conceptually simple and relies on a primal-dual semidefinite-programming (SDP) approach. We first consider a natural SDP relaxation for the Balanced Separator problem. While it is easy to obtain from this SDP a certificate of the fact that the graph has no balanced cut of conductance less than $\gamma,$ somewhat surprisingly, we can obtain a certificate for the stronger correlation condition. This is achieved via a novel separation oracle for our SDP and by appealing to Arora and Kale's framework to bound the running time. Our result contains technical ingredients that may be of independent interest. | Towards an SDP-based Approach to Spectral Methods: A Nearly-Linear-Time Algorithm for Graph Partitioning and Decomposition | 4,526 |
We consider energy-efficient scheduling on multiprocessors, where the speed of each processor can be individually scaled, and a processor consumes power $s^{\alpha}$ when running at speed $s$, for $\alpha>1$. A scheduling algorithm needs to decide at any time both processor allocations and processor speeds for a set of parallel jobs with time-varying parallelism. The objective is to minimize the sum of the total energy consumption and a certain performance metric, which in this paper includes total flow time and makespan. For both objectives, we present instantaneous parallelism clairvoyant (IP-clairvoyant) algorithms that are aware of the instantaneous parallelism of the jobs at any time but not their future characteristics, such as remaining parallelism and work. For total flow time plus energy, we present an $O(1)$-competitive algorithm, which significantly improves upon the best known non-clairvoyant algorithm and is the first constant competitive result on multiprocessor speed scaling for parallel jobs. In the case of makespan plus energy, which is considered for the first time in the literature, we present an $O(\ln^{1-1/\alpha}P)$-competitive algorithm, where $P$ is the total number of processors. We show that this algorithm is asymptotically optimal by providing a matching lower bound. In addition, we also study non-clairvoyant scheduling for total flow time plus energy, and present an algorithm that is $O(\ln P)$-competitive for jobs with arbitrary release times and $O(\ln^{1/\alpha}P)$-competitive for jobs with identical release times. Finally, we prove an $\Omega(\ln^{1/\alpha}P)$ lower bound on the competitive ratio of any non-clairvoyant algorithm, matching the upper bound of our algorithm for jobs with identical release times. | Energy-Efficient Multiprocessor Scheduling for Flow Time and Makespan | 4,527 |
Given two testable properties $\mathcal{P}_{1}$ and $\mathcal{P}_{2}$, under what conditions are the union, intersection or set-difference of these two properties also testable? We initiate a systematic study of these basic set-theoretic operations in the context of property testing. As an application, we give a conceptually different proof that linearity is testable, albeit with much worse query complexity. Furthermore, for the problem of testing disjunction of linear functions, which was previously known to be one-sided testable with a super-polynomial query complexity, we give an improved analysis and show it has query complexity $O(1/\epsilon^2)$, where $\epsilon$ is the distance parameter. | Property Testing via Set-Theoretic Operations | 4,528 |
Let $G=(V,E)$ be a graph on $n$ vertices and $R$ be a set of pairs of vertices in $V$ called \emph{requests}. A \emph{multicut} is a subset $F$ of $E$ such that every request $xy$ of $R$ is cut by $F$, i.e. every $xy$-path of $G$ intersects $F$. We show that there exists an $O(f(k)n^c)$ algorithm which decides if there exists a multicut of size at most $k$. In other words, the Multicut problem parameterized by the solution size $k$ is Fixed-Parameter Tractable. The proof extends to vertex multicuts. | Multicut is FPT | 4,529 |
We analyze the so-called ppz algorithm for (d,k)-CSP problems for general values of d (number of values a variable can take) and k (number of literals per constraint). To analyze its success probability, we prove a correlation inequality for submodular functions. | PPZ For More Than Two Truth Values - An Algorithm for Constraint Satisfaction Problems | 4,530 |
We study the problem of Upward Point-Set Embeddability, that is, the problem of deciding whether a given upward planar digraph $D$ has an upward planar embedding into a point set $S$. We show that any switch tree admits an upward planar straight-line embedding into any convex point set. For the class of $k$-switch trees, which is a generalization of switch trees (according to this definition a switch tree is a $1$-switch tree), we show that not every $k$-switch tree admits an upward planar straight-line embedding into any convex point set, for any $k \geq 2$. Finally, we show that the problem of Upward Point-Set Embeddability is NP-complete. | Upward Point-Set Embeddability | 4,531 |
A mixed graph is a graph with both directed and undirected edges. We present an algorithm for deciding whether a given mixed graph on $n$ vertices contains a feedback vertex set (FVS) of size at most $k$, in time $2^{O(k)} \cdot k! \cdot O(n^4)$. This is the first fixed-parameter tractable algorithm for FVS that applies to both directed and undirected graphs. | Feedback Vertex Set in Mixed Graphs | 4,532 |
We study the problem of learning to rank from pairwise preferences, and solve a long-standing open problem that has led to development of many heuristics but no provable results for our particular problem. Given a set $V$ of $n$ elements, we wish to linearly order them given pairwise preference labels. A pairwise preference label is obtained as a response, typically from a human, to the question "which is preferred, $u$ or $v$?" for two elements $u,v\in V$. We assume possible non-transitivity paradoxes which may arise naturally due to human mistakes or irrationality. The goal is to linearly order the elements from the most preferred to the least preferred, while disagreeing with as few pairwise preference labels as possible. Our performance is measured by two parameters: the loss and the query complexity (the number of pairwise preference labels we obtain). This is a typical learning problem, with the exception that the space from which the pairwise preferences are drawn is finite, consisting of ${n\choose 2}$ possibilities only. We present an active learning algorithm for this problem, with query bounds significantly beating general (non-active) bounds for the same error guarantee, while almost achieving the information-theoretic lower bound. Our main construct is a decomposition of the input such that (i) each block incurs high loss at optimum, and (ii) the optimal solution respecting the decomposition is not much worse than the true optimum. The decomposition is done by adapting a recent result by Kenyon and Schudy for a related combinatorial optimization problem to the query-efficient setting. We thus settle an open problem posed by learning-to-rank theoreticians and practitioners: What is a provably correct way to sample preference labels? To further show the power and practicality of our solution, we show how to use it in concert with an SVM relaxation. | An Active Learning Algorithm for Ranking from Pairwise Preferences with an Almost Optimal Query Complexity | 4,533 |
Alon and Krivelevich (SIAM J. Discrete Math. 15(2): 211-227 (2002)) show that if a graph is {\epsilon}-far from bipartite, then the subgraph induced by a random subset of O(1/{\epsilon}) vertices is not bipartite with high probability. We conjecture that the induced subgraph is \tilde{\Omega}({\epsilon})-far from bipartite with high probability. Gonen and Ron (RANDOM 2007) proved this conjecture in the case when the degrees of all vertices are at most O({\epsilon}n). We give a more general proof that works for any d-regular (or almost d-regular) graph for arbitrary degree d. Assuming this conjecture, we prove that bipartiteness is testable with one-sided error in time O(1/{\epsilon}^c), where c is a constant strictly smaller than two, improving upon the tester of Alon and Krivelevich. As it is known that non-adaptive testers for bipartiteness require {\Omega}(1/{\epsilon}^2) queries (Bogdanov and Trevisan, CCC 2004), our result shows, assuming the conjecture, that adaptivity helps in testing bipartiteness. | A better tester for bipartiteness? | 4,534 |
One of the classic results in scheduling theory is the 2-approximation algorithm by Lenstra, Shmoys, and Tardos for the problem of scheduling jobs to minimize makespan on unrelated machines, i.e., job j requires time p_{ij} if processed on machine i. More than two decades after its introduction it is still the algorithm of choice even in the restricted model where processing times are of the form p_{ij} in {p_j, \infty}. This problem, also known as the restricted assignment problem, is NP-hard to approximate within a factor less than 1.5 which is also the best known lower bound for the general version. Our main result is a polynomial time algorithm that estimates the optimal makespan of the restricted assignment problem within a factor 33/17 + \epsilon \approx 1.9412 + \epsilon, where \epsilon > 0 is an arbitrarily small constant. The result is obtained by upper bounding the integrality gap of a certain strong linear program, known as configuration LP, that was previously successfully used for the related Santa Claus problem. Similar to the strongest analysis for that problem our proof is based on a local search algorithm that will eventually find a schedule of the mentioned approximation guarantee, but is not known to converge in polynomial time. | Santa Claus Schedules Jobs on Unrelated Machines | 4,535 |
We study a combinatorial problem arising from microarrays synthesis. The synthesis is done by a light-directed chemical process. The objective is to minimize unintended illumination that may contaminate the quality of experiments. Unintended illumination is measured by a notion called border length and the problem is called Border Minimization Problem (BMP). The objective of the BMP is to place a set of probe sequences in the array and find an embedding (deposition of nucleotides/residues to the array cells) such that the sum of border length is minimized. A variant of the problem, called P-BMP, is that the placement is given and the concern is simply to find the embedding. Approximation algorithms have been previously proposed for the problem but it was unknown whether the problem is NP-hard or not. In this paper, we give a thorough study of different variations of BMP by giving NP-hardness proofs and improved approximation algorithms. We show that P-BMP, 1D-BMP, and BMP are all NP-hard. In contrast with the previous result that 1D-P-BMP is polynomial time solvable, the interesting implications include (i) the array dimension (1D or 2D) differentiates the complexity of P-BMP; (ii) for 1D array, whether placement is given differentiates the complexity of BMP; (iii) BMP is NP-hard regardless of the dimension of the array. Another contribution of the paper is improving the approximation for BMP from $O(n^{1/2} \log^2 n)$ to $O(n^{1/4} \log^2 n)$, where $n$ is the total number of sequences. | Hardness and Approximation of The Asynchronous Border Minimization Problem | 4,536 |
Suppose we have n keys, n access probabilities for the keys, and n+1 access probabilities for the gaps between the keys. Let h_min(n) be the minimal height of a binary search tree for n keys. We consider the problem of constructing an optimal binary search tree with near minimal height, i.e.\ with height h <= h_min(n) + Delta for some fixed Delta. It is shown that for any fixed Delta, optimal binary search trees with near minimal height can be constructed in time O(n^2). This is as fast as in the unrestricted case. So far, the best known algorithms for the construction of height-restricted optimal binary search trees have running time O(L n^2), where L is the maximal permitted height. Compared to these algorithms, our algorithm is faster by at least a factor of log n, because L is lower bounded by log n. | Optimal Binary Search Trees with Near Minimal Height | 4,537 |
We present a new data structure called the \emph{Compressed Random Access Memory} (CRAM) that can store a dynamic string $T$ of characters, e.g., representing the memory of a computer, in compressed form while achieving asymptotically almost-optimal bounds (in terms of empirical entropy) on the compression ratio. It allows short substrings of $T$ to be decompressed and retrieved efficiently and, significantly, characters at arbitrary positions of $T$ to be modified quickly during execution \emph{without decompressing the entire string}. This can be regarded as a new type of data compression that can update a compressed file directly. Moreover, at the cost of slightly increasing the time spent per operation, the CRAM can be extended to also support insertions and deletions. Our key observation that the empirical entropy of a string does not change much after a small change to the string, as well as our simple yet efficient method for maintaining an array of variable-length blocks under length modifications, may be useful for many other applications as well. | CRAM: Compressed Random Access Memory | 4,538 |
We show that for every fixed undirected graph $H$, there is a $O(|V(G)|^3)$ time algorithm that tests, given a graph $G$, if $G$ contains $H$ as a topological subgraph (that is, a subdivision of $H$ is subgraph of $G$). This shows that topological subgraph testing is fixed-parameter tractable, resolving a longstanding open question of Downey and Fellows from 1992. As a corollary, for every $H$ we obtain an $O(|V(G)|^3)$ time algorithm that tests if there is an immersion of $H$ into a given graph $G$. This answers another open question raised by Downey and Fellows in 1992. | Finding topological subgraphs is fixed-parameter tractable | 4,539 |
Due to its optimality on a single machine for the problem of minimizing average flow time, Shortest-Remaining-Processing-Time (\srpt) appears to be the most natural algorithm to consider for the problem of minimizing average flow time on multiple identical machines. It is known that $\srpt$ achieves the best possible competitive ratio on multiple machines up to a constant factor. Using resource augmentation, $\srpt$ is known to achieve total flow time at most that of the optimal solution when given machines of speed $2- \frac{1}{m}$. Further, it is known that $\srpt$'s competitive ratio improves as the speed increases; $\srpt$ is $s$-speed $\frac{1}{s}$-competitive when $s \geq 2- \frac{1}{m}$. However, a gap has persisted in our understanding of $\srpt$. Before this work, the performance of $\srpt$ was not known when $\srpt$ is given $(1+\eps)$-speed when $0 < \eps < 1-\frac{1}{m}$, even though it has been thought that $\srpt$ is $(1+\eps)$-speed $O(1)$-competitive for over a decade. Resolving this question was suggested in Open Problem 2.9 from the survey "Online Scheduling" by Pruhs, Sgall, and Torng \cite{PruhsST}, and we answer the question in this paper. We show that $\srpt$ is \emph{scalable} on $m$ identical machines. That is, we show $\srpt$ is $(1+\eps)$-speed $O(\frac{1}{\eps})$-competitive for $\eps >0$. We complement this by showing that $\srpt$ is $(1+\eps)$-speed $O(\frac{1}{\eps^2})$-competitive for the objective of minimizing the $\ell_k$-norms of flow time on $m$ identical machines. Both of our results rely on new potential functions that capture the structure of \srpt. Our results, combined with previous work, show that $\srpt$ is the best possible online algorithm in essentially every aspect when migration is permissible. | Online Scheduling on Identical Machines using SRPT | 4,540 |
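For reference, here is a minimal discrete-time simulation of the SRPT rule on m identical machines (an illustration with unit time slices and integer job sizes; the paper's setting is online with speed augmentation, which this sketch does not model):

```python
import heapq

def srpt_total_flow_time(jobs, m):
    """SRPT on m identical machines with migration: in each unit slice,
    run the (at most) m alive jobs with shortest remaining time.
    `jobs` is a list of (release_time, processing_time) pairs.
    Returns total flow time: sum over jobs of (completion - release)."""
    jobs = sorted(jobs)                      # by release time
    t, i, total_flow = 0, 0, 0
    alive = []                               # heap of [remaining, release]
    while i < len(jobs) or alive:
        while i < len(jobs) and jobs[i][0] <= t:
            heapq.heappush(alive, [jobs[i][1], jobs[i][0]])
            i += 1
        running = [heapq.heappop(alive) for _ in range(min(m, len(alive)))]
        for job in running:
            job[0] -= 1                      # one unit of processing
            if job[0] == 0:
                total_flow += (t + 1) - job[1]
            else:
                heapq.heappush(alive, job)
        t += 1
    return total_flow

print(srpt_total_flow_time([(0, 3), (0, 1), (1, 2)], m=1))  # 9
```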
Consider an optimization problem with $n$ binary variables and $d+1$ linear objective functions. Each valid solution $x \in \{0,1\}^n$ gives rise to an objective vector in $\mathbb{R}^{d+1}$, and one often wants to enumerate the Pareto optima among them. In the worst case there may be exponentially many Pareto optima; however, it was recently shown that in (a generalization of) the smoothed analysis framework, the expected number is polynomial in $n$. Unfortunately, the bound obtained had a rather bad dependence on $d$; roughly $n^{d^d}$. In this paper we show a significantly improved bound of $n^{2d}$. Our proof is based on analyzing two algorithms. The first algorithm, on input a Pareto optimal $x$, outputs a "testimony" containing clues about $x$'s objective vector, $x$'s coordinates, and the region of space $B$ in which $x$'s objective vector lies. The second algorithm can be regarded as a {\em speculative} execution of the first -- it can uniquely reconstruct $x$ from the testimony's clues and just \emph{some} of the probability space's outcomes. The remainder of the probability space's outcomes are just enough to bound the probability that $x$'s objective vector falls into the region $B$. | Pareto Optimal Solutions for Smoothed Analysts | 4,541 |
We study sorting algorithms based on randomized round-robin comparisons. Specifically, we study Spin-the-bottle sort, where comparisons are unrestricted, and Annealing sort, where comparisons are restricted to a distance bounded by a \emph{temperature} parameter. Both algorithms are simple, randomized, data-oblivious sorting algorithms, which are useful in privacy-preserving computations, but, as we show, Annealing sort is much more efficient. We show that there is an input permutation that causes Spin-the-bottle sort to require $\Omega(n^2\log n)$ expected time in order to succeed, and that in $O(n^2\log n)$ time this algorithm succeeds with high probability for any input. We also show there is an implementation of Annealing sort that runs in $O(n\log n)$ time and succeeds with very high probability. | Spin-the-bottle Sort and Annealing Sort: Oblivious Sorting via Round-robin Random Comparisons | 4,542 |
In a ground-breaking paper, Indyk and Woodruff (STOC 05) showed how to compute $F_k$ (for $k>2$) in space complexity $O(\mbox{\em poly-log}(n,m)\cdot n^{1-\frac2k})$, which is optimal up to (large) poly-logarithmic factors in $n$ and $m$, where $m$ is the length of the stream and $n$ is the upper bound on the number of distinct elements in a stream. The best known lower bound for large moments is $\Omega(\log(n)n^{1-\frac2k})$. A follow-up work of Bhuvanagiri, Ganguly, Kesh and Saha (SODA 2006) reduced the poly-logarithmic factors of Indyk and Woodruff to $O(\log^2(m)\cdot (\log n+ \log m)\cdot n^{1-{2\over k}})$. Further reduction of poly-log factors has been an elusive goal since 2006, when the Indyk and Woodruff method seemed to hit a natural "barrier." Using our simple recursive sketch, we provide a different yet simple approach to obtain a $O(\log(m)\log(nm)\cdot (\log\log n)^4\cdot n^{1-{2\over k}})$ algorithm for constant $\epsilon$ (our bound is, in fact, somewhat stronger, as the $(\log\log n)$ term can be replaced by any constant number of $\log$ iterations instead of just two or three, thus approaching $\log^* n$). Our bound also works for non-constant $\epsilon$ (for details see the body of the paper). Further, our algorithm requires only $4$-wise independence, in contrast to existing methods that use pseudo-random generators for computing large frequency moments. | Recursive Sketching For Frequency Moments | 4,543 |
The celebrated dimension reduction lemma of Johnson and Lindenstrauss has numerous computational and other applications. Due to its application in practice, speeding up the computation of a Johnson-Lindenstrauss style dimension reduction is an important question. Recently, Dasgupta, Kumar, and Sarlos (STOC 2010) constructed such a transform that uses a sparse matrix. This is motivated by the desire to speed up the computation when applied to sparse input vectors, a scenario that comes up in applications. The sparsity of their construction was further improved by Kane and Nelson (ArXiv 2010). We improve the previous bound on the number of non-zero entries per column of Kane and Nelson from $O(1/\epsilon \log(1/\delta)\log(k/\delta))$ (where the target dimension is $k$, the distortion is $1\pm \epsilon$, and the failure probability is $\delta$) to $$ O\left({1\over\epsilon} \left({\log(1/\delta)\log\log\log(1/\delta) \over \log\log(1/\delta)}\right)^2\right). $$ We also improve the amount of randomness needed to generate the matrix. Our results are obtained by connecting the moments of an order 2 Rademacher chaos to the combinatorial properties of random Eulerian multigraphs. Estimating the chance that a random multigraph is composed of a given number of node-disjoint Eulerian components leads to a new tail bound on the chaos. Our estimates may be of independent interest, and as this part of the argument is decoupled from the analysis of the coefficients of the chaos, we believe that our methods can be useful in the analysis of other chaoses. | Rademacher Chaos, Random Eulerian Graphs and The Sparse Johnson-Lindenstrauss Transform | 4,544 |
In this paper we study minimum cut and maximum flow problems on planar graphs, both in static and in dynamic settings. First, we present an algorithm that given an undirected planar graph computes the minimum cut between any two given vertices in O(n log log n) time. Second, we show how to achieve the same O(n log log n) bound for the problem of computing maximum flows in undirected planar graphs. To the best of our knowledge, these are the first algorithms for those two problems that break the O(n log n) barrier, which has been standing for more than 25 years. Third, we present a fully dynamic algorithm that is able to maintain information about minimum cuts and maximum flows in a plane graph (i.e., a planar graph with a fixed embedding): our algorithm is able to insert edges, delete edges and answer min-cut and max-flow queries between any pair of vertices in O(n^(2/3) log^3 n) time per operation. This result is based on a new dynamic shortest path algorithm for planar graphs which may be of independent interest. We remark that this is the first known non-trivial algorithm for min-cut and max-flow problems in a dynamic setting. | Improved Minimum Cuts and Maximum Flows in Undirected Planar Graphs | 4,545 |
In this paper, we explore worst-case solutions for the problems of single and multiple matching on strings in the word RAM model with word length w. In the first problem, we have to build a data structure based on a pattern p of length m over an alphabet of size sigma such that we can answer the following query: given a text T of length n, where each character is encoded using log(sigma) bits, return the positions of all the occurrences of p in T (in the following we denote by occ the number of reported occurrences). For the multi-pattern matching problem we have a set S of d patterns of total length m, and a query on a text T consists of finding all positions of all occurrences in T of the patterns in S. As each character of the text is encoded using log(sigma) bits and we can read w bits in constant time in the RAM model, we assume that we can read up to (w/log sigma) consecutive characters of the text in one time step. This implies that the fastest possible query time for both problems is O(n(log sigma)/w + occ). In this paper we present several different results for both problems which come close to that best possible query time. We first present two different linear space data structures for the first and second problem: the first one answers single pattern matching queries in time O(n(1/m + log sigma/w) + occ), while the second one answers multiple pattern matching queries in time O(n((log d + log y + log log d)/y + log sigma/w) + occ), where y is the length of the shortest pattern. We then show how a simple application of the four Russians technique permits us to obtain data structures with query times independent of the length of the shortest pattern (the length of the only pattern in the case of single string matching) at the expense of using more space. | Worst case efficient single and multiple string matching in the Word-RAM model | 4,546 |
Suppose we are asked to preprocess a string \(s[1..n]\) such that later, given a substring's endpoints, we can quickly count how many distinct characters it contains. In this paper we give a data structure for this problem that takes \(n H_0(s) + O(n) + o(n H_0(s))\) bits, where \(H_0(s)\) is the 0th-order empirical entropy of \(s\), and answers queries in \(O(\log^{1 + \epsilon} n)\) time for any constant \(\epsilon > 0\). We also show how our data structure can be made partially dynamic. | Counting Colours in Compressed Strings | 4,547 |
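For contrast with the compressed structure above, a classic offline way to answer distinct-character (colour) queries is to sweep the right endpoint and keep a 1 in a Fenwick tree at the last occurrence of each character; this is a well-known folklore technique, not the paper's data structure.

```python
def count_distinct_offline(s, queries):
    """Answer (l, r) queries (0-indexed, inclusive) for the number of
    distinct characters in s[l..r]: process queries by increasing r,
    marking only the *last* occurrence so far of each character."""
    n = len(s)
    bit = [0] * (n + 1)                      # Fenwick tree

    def add(i, v):
        i += 1
        while i <= n:
            bit[i] += v
            i += i & (-i)

    def prefix(i):                           # sum over positions 0..i-1
        total = 0
        while i > 0:
            total += bit[i]
            i -= i & (-i)
        return total

    last = {}                                # char -> marked position
    answers = [0] * len(queries)
    order = sorted(range(len(queries)), key=lambda q: queries[q][1])
    qi = 0
    for r, ch in enumerate(s):
        if ch in last:
            add(last[ch], -1)                # unmark previous occurrence
        add(r, 1)
        last[ch] = r
        while qi < len(order) and queries[order[qi]][1] == r:
            l, _ = queries[order[qi]]
            answers[order[qi]] = prefix(r + 1) - prefix(l)
            qi += 1
    return answers

print(count_distinct_offline("abcba", [(0, 4), (1, 3)]))  # [3, 2]
```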
Suppose we have just performed searches in a self-index for two patterns $A$ and $B$ and now we want to search for their concatenation $AB$; how can we best make use of our previous computations? In this paper we consider this problem and, more generally, how we can store a dynamic library of patterns that we can easily manipulate in interesting ways. We give a space- and time-efficient data structure for this problem that is compatible with many of the best self-indexes. | Pattern Kits | 4,548 |
We examine directed spanners through flow-based linear programming relaxations. We design an $\tilde{O}(n^{2/3})$-approximation algorithm for the directed $k$-spanner problem that works for all $k\geq 1$, which is the first sublinear approximation for arbitrary edge-lengths. Even in the more restricted setting of unit edge-lengths, our algorithm improves over the previous $\tilde{O}(n^{1-1/k})$ approximation of Bhattacharyya et al. when $k\ge 4$. For the special case of $k=3$ we design a different algorithm achieving an $\tilde{O}(\sqrt{n})$-approximation, improving the previous $\tilde{O}(n^{2/3})$. Both of our algorithms easily extend to the fault-tolerant setting, which has recently attracted attention but not from an approximation viewpoint. We also prove a nearly matching integrality gap of $\Omega(n^{\frac{1}{3} - \epsilon})$ for any constant $\epsilon > 0$. A virtue of all our algorithms is that they are relatively simple. Technically, we introduce a new yet natural flow-based relaxation, and show how to approximately solve it even when its size is not polynomial. The main challenge is to design a rounding scheme that "coordinates" the choices of flow-paths between the many demand pairs while using few edges overall. We achieve this, roughly speaking, by randomization at the level of vertices. | Directed Spanners via Flow-Based Linear Programs | 4,549 |
Given a metric space on n points, an {\alpha}-approximate universal algorithm for the Steiner tree problem outputs a distribution over rooted spanning trees such that for any subset X of vertices containing the root, the expected cost of the induced subtree is within an {\alpha} factor of the optimal Steiner tree cost for X. An {\alpha}-approximate differentially private algorithm for the Steiner tree problem takes as input a subset X of vertices, and outputs a tree distribution that induces a solution within an {\alpha} factor of the optimal as before, and satisfies the additional property that for any set X' that differs in a single vertex from X, the tree distributions for X and X' are "close" to each other. Universal and differentially private algorithms for TSP are defined similarly. An {\alpha}-approximate universal algorithm for the Steiner tree problem or TSP is also an {\alpha}-approximate differentially private algorithm. It is known that both problems admit O(log n)-approximate universal algorithms, and hence O(log n)-approximate differentially private algorithms as well. We prove an {\Omega}(log n) lower bound on the approximation ratio achievable for the universal Steiner tree problem and the universal TSP, matching the known upper bounds. Our lower bound for the Steiner tree problem holds even when the algorithm is allowed to output a more general solution of a distribution on paths to the root. | Optimal Lower Bounds for Universal and Differentially Private Steiner Tree and TSP | 4,550 |
This paper introduces a novel method for the compact representation of sets of n-dimensional binary sequences in the form of compact triplet structures (CTS), allowing both logic and arithmetic interpretations of the data. A suitable illustration of the CTS application is the unique graph-combinatorial model for the classic intractable 3-Satisfiability problem and a polynomial algorithm for the model synthesis. The method used for the analysis and classification of Boolean formulas by means of the model is defined as a bijective mapping principle for sets of components of discordant structures to a basic set. A statistical computer-aided experiment showed the efficiency of the algorithm over a wide range of problem dimension parameters, including those that render enumeration procedures useless. The formulated principle expands the resources of the constructive approach to the investigation of intractable problems. | Non-Orthodox Combinatorial Models Based on Discordant Structures | 4,551 |
We present two methods to compress the description of a route in a road network, i.e., of a path in a directed graph. The first method represents a path by a sequence of via edges. The subpaths between the via edges have to be unique shortest paths. Via nodes can also be used instead of via edges, though this requires some simple preprocessing. The second method uses contraction hierarchies to replace subpaths of the original path by shortcuts. The two methods can be combined with each other. We also propose an application to mobile server-based routing: we compute the route on a server which has access to the latest information, about congestion for example. Then we transmit the computed route to the car using mobile radio communication. There, we apply the compression to save costs and transmission time. If the compression works well, we can transmit routes even when the bandwidth is low. Although we have not yet evaluated our ideas with realistic data, they are quite promising. | Compressed Transmission of Route Descriptions | 4,552 |
Randomized algorithms are often enjoyed for their simplicity, but the hash functions used to yield the desired theoretical guarantees are often neither simple nor practical. Here we show that the simplest possible tabulation hashing provides unexpectedly strong guarantees. The scheme itself dates back to Carter and Wegman (STOC'77). Keys are viewed as consisting of c characters. We initialize c tables T_1, ..., T_c mapping characters to random hash codes. A key x=(x_1, ..., x_c) is hashed to T_1[x_1] xor ... xor T_c[x_c]. While this scheme is not even 4-independent, we show that it provides many of the guarantees that are normally obtained via higher independence, e.g., Chernoff-type concentration, min-wise hashing for estimating set intersection, and cuckoo hashing. | The Power of Simple Tabulation Hashing | 4,553 |
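The scheme fits in a few lines. A sketch with assumed parameters (c = 4 characters of 8 bits each, 32-bit hash codes):

```python
import random

def make_simple_tabulation_hash(c=4, char_bits=8, hash_bits=32):
    """Simple tabulation hashing: view a key as c characters of char_bits
    bits; the hash is the XOR of one random table entry per character."""
    tables = [[random.getrandbits(hash_bits) for _ in range(1 << char_bits)]
              for _ in range(c)]
    mask = (1 << char_bits) - 1

    def h(x):
        code = 0
        for i in range(c):
            code ^= tables[i][(x >> (i * char_bits)) & mask]
        return code
    return h

h = make_simple_tabulation_hash()
print(hex(h(0xDEADBEEF)))   # same key always maps to the same code
```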
We present a new randomized algorithm for computing the diameter of a weighted directed graph. The algorithm runs in $\tilde{O}(M^{\omega/(\omega+1)}n^{(\omega^2+3)/(\omega+1)})$ time, where $\omega < 2.376$ is the exponent of fast matrix multiplication, $n$ is the number of vertices of the graph, and the edge weights are integers in $\{-M,...,0,...,M\}$. For bounded integer weights the running time is $O(n^{2.561})$ and if $\omega=2+o(1)$ it is $\tilde{O}(n^{7/3})$. This is the first algorithm that computes the diameter of an integer weighted directed graph polynomially faster than any known All-Pairs Shortest Paths (APSP) algorithm. For bounded integer weights, the fastest algorithm for APSP runs in $O(n^{2.575})$ time for the present value of $\omega$ and runs in $\tilde{O}(n^{2.5})$ time if $\omega=2+o(1)$. For directed graphs with {\em positive} integer weights in $\{1,...,M\}$ we obtain a deterministic algorithm that computes the diameter in $\tilde{O}(Mn^\omega)$ time. This extends a simple $\tilde{O}(n^\omega)$ algorithm for computing the diameter of an {\em unweighted} directed graph to the positive integer weighted setting and is the first algorithm in this setting whose time complexity matches that of the fastest known Diameter algorithm for {\em undirected} graphs. The diameter algorithms are consequences of a more general result. We construct algorithms that, for any given integer $d$, report all ordered pairs of vertices having distance {\em at most} $d$. The diameter can therefore be computed using binary search for the smallest $d$ for which all pairs are reported. | Computing the diameter polynomially faster than APSP | 4,554 |
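The closing reduction can be sketched directly; `all_pairs_within(d)` below is an assumed stand-in for the paper's reporting subroutine:

```python
def diameter_by_binary_search(hi, all_pairs_within):
    """Binary search for the smallest d such that every ordered pair is
    within distance d; `hi` is any known upper bound on the diameter."""
    lo = 0
    while lo < hi:
        mid = (lo + hi) // 2
        if all_pairs_within(mid):
            hi = mid           # d = mid suffices; try smaller
        else:
            lo = mid + 1       # some pair is farther than mid
    return lo

# toy check on a path 0-1-2-3 (diameter 3):
dist = lambda u, v: abs(u - v)
within = lambda d: all(dist(u, v) <= d for u in range(4) for v in range(4))
print(diameter_by_binary_search(10, within))  # 3
```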
Let $\xi$ be a random integer vector having uniform distribution \[\mathbf{P}\{\xi = (i_1,i_2,...,i_n)\} = 1/n^n \quad \hbox{for} \quad 1 \leq i_1,i_2,...,i_n \leq n.\] A realization $(i_1,i_2,...,i_n)$ of $\xi$ is called \textit{good} if its elements are different. We present the algorithms \textsc{Linear}, \textsc{Backward}, \textsc{Forward}, \textsc{Tree}, \textsc{Garbage}, and \textsc{Bucket}, which decide whether a given realization is good. We analyse the number of comparisons and the running time of these algorithms using simulation, gathering data on all possible inputs for small values of $n$ and generating random inputs for large values of $n$. | Testing of sequences by simulation | 4,555 |
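For concreteness, a one-pass test in the spirit of the Linear/Bucket variants (a sketch; the paper compares six algorithms empirically):

```python
def is_good(realization):
    """Decide in linear time whether a realization (i_1, ..., i_n) with
    entries in 1..n has pairwise different elements, via a seen-array."""
    n = len(realization)
    seen = [False] * (n + 1)
    for x in realization:
        if seen[x]:
            return False       # repeated element: not good
        seen[x] = True
    return True

print(is_good([2, 3, 1]), is_good([2, 2, 3]))  # True False
```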
In this paper we introduce the concept of a generalized d-graph (admitting cycles) as a special dependency graph for modelling dynamic programming (DP) problems. We describe the d-graph versions of three well-known single-source shortest path algorithms (the algorithm based on a topological order of the vertices, Dijkstra's algorithm, and the Bellman-Ford algorithm), which can be viewed as general DP strategies for three different classes of optimization problems. The new modelling method also makes it possible to classify DP problems and the corresponding DP strategies in terms of graph theory. | Modelling dynamic programming problems by generalized d-graphs | 4,556
In this work, we present a comprehensive treatment of weighted random sampling (WRS) over data streams. More precisely, we examine two natural interpretations of the item weights, describe an existing algorithm for each case ([2, 4]), discuss sampling with and without replacement and show adaptations of the algorithms for several WRS problems and evolving data streams. | Weighted Random Sampling over Data Streams | 4,557 |
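Under one natural interpretation of the weights (an item's chance of entering the sample grows with its weight, sampling without replacement), a standard one-pass scheme — often called A-Res, and of the kind the entry above discusses — keeps the k items with the largest random keys u^(1/w). A hedged sketch; whether this matches reference [2] or [4] exactly is an assumption:

```python
import heapq
import random

def weighted_reservoir_sample(stream, k):
    """One-pass weighted sampling without replacement over a stream of
    (item, weight) pairs: keep the k items with the largest keys
    u ** (1 / w), with u uniform in (0, 1)."""
    heap = []  # min-heap of (key, tiebreak, item)
    for idx, (item, weight) in enumerate(stream):
        key = random.random() ** (1.0 / weight)
        if len(heap) < k:
            heapq.heappush(heap, (key, idx, item))
        elif key > heap[0][0]:
            heapq.heapreplace(heap, (key, idx, item))
    return [item for _, _, item in heap]
```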
Knuth [12, Page 417] states that "the (program of the) Fibonaccian search technique looks very mysterious at first glance" and that "it seems to work by magic". In this work, we show that there is even more magic in Fibonaccian (also known as Fibonacci) search. We present a generalized Fibonacci procedure that perfectly follows the implicit optimal decision tree for search problems where the cost of each comparison depends on its outcome. | (α, β) Fibonacci Search | 4,558
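For contrast with the generalized procedure above, here is the classic uniform-cost Fibonaccian search over a sorted array — the textbook baseline the paper generalizes, not the (α, β) variant itself:

```python
def fibonacci_search(arr, target):
    """Classic Fibonaccian search on a sorted list.
    Returns an index of target, or -1 if absent."""
    n = len(arr)
    fib2, fib1 = 0, 1          # F(m-2), F(m-1)
    fib = fib2 + fib1          # F(m), grown to the smallest value >= n
    while fib < n:
        fib2, fib1 = fib1, fib
        fib = fib2 + fib1
    offset = -1                # rightmost index known to be < target
    while fib > 1:
        i = min(offset + fib2, n - 1)
        if arr[i] < target:        # discard the left part
            fib, fib1 = fib1, fib2
            fib2 = fib - fib1
            offset = i
        elif arr[i] > target:      # discard the right part
            fib = fib2
            fib1 = fib1 - fib2
            fib2 = fib - fib1
        else:
            return i
    if fib1 and offset + 1 < n and arr[offset + 1] == target:
        return offset + 1
    return -1
```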
We present an efficient algorithm for finding all approximate occurrences of a given pattern $p$ of length $m$ in a text $t$ of length $n$, allowing for translocations of equal-length adjacent factors and for inversions of factors. The algorithm is based on an efficient filtering method and has an $O(nm\max(\alpha, \beta))$-time complexity in the worst case and $O(\max(\alpha, \beta))$-space complexity, where $\alpha$ and $\beta$ are, respectively, the maximum lengths of the factors involved in any translocation and in any inversion. Moreover, we show that under the assumptions of equiprobability and independence of characters our algorithm has an $O(n)$ average-time complexity whenever $\sigma = \Omega(\log m / \log\log^{1-\epsilon} m)$, where $\epsilon > 0$ and $\sigma$ is the size of the alphabet. Experiments show that the new proposed algorithm achieves very good results in practical cases. | String Matching with Inversions and Translocations in Linear Average Time (Most of the Time) | 4,559
The present work analyzes the redundancy of sets of combinatorial objects produced by a weighted random generation algorithm proposed by Denise et al. This scheme associates weights to the terminal symbols of a weighted context-free grammar, extends this weight definition multiplicatively to words, and draws words of length $n$ with probability proportional to their weight. We investigate the level of redundancy within a sample of $k$ words, the proportion of the total probability covered by $k$ words (the coverage), the time (number of generations) of the first collision, and the time of the full collection. For these four questions, we use an analytic urn analogy to derive asymptotic estimates and/or polynomially computable exact forms. We illustrate these tools by an analysis of an RNA secondary structure statistical sampling algorithm introduced by Ding et al. | Weighted random generation of context-free languages: Analysis of collisions in random urn occupancy models | 4,560
In 2009, Roeglin and Teng showed that the smoothed number of Pareto optimal solutions of linear multi-criteria optimization problems is polynomially bounded in the number $n$ of variables and the maximum density $\phi$ of the semi-random input model for any fixed number of objective functions. Their bound is, however, not very practical because the exponents grow exponentially in the number $d+1$ of objective functions. In a recent breakthrough, Moitra and O'Donnell improved this bound significantly to $O(n^{2d} \phi^{d(d+1)/2})$. An "intriguing problem", which Moitra and O'Donnell formulate in their paper, is how much further this bound can be improved. The previous lower bounds do not exclude the possibility of a polynomial upper bound whose degree does not depend on $d$. In this paper we resolve this question by constructing a class of instances with $\Omega((n \phi)^{(d-\log d) \cdot (1-\Theta(1/\phi))})$ Pareto optimal solutions in expectation. For the bi-criteria case we present a higher lower bound of $\Omega(n^2 \phi^{1 - \Theta(1/\phi)})$, which almost matches the known upper bound of $O(n^2 \phi)$. | Lower Bounds for the Smoothed Number of Pareto optimal Solutions | 4,561
In this note we present the worst-character rule, an efficient variation of the bad-character heuristic for the exact string matching problem, first introduced in the well-known Boyer-Moore algorithm. Our proposed rule selects the position, relative to the current shift, which yields the largest average advancement according to the character distribution of the text. Experimental results show that the worst-character rule achieves very good results, especially in the case of long patterns or small alphabets in random texts, and in the case of texts in natural languages. | On Tuning the Bad-Character Rule: the Worst-Character Rule | 4,562
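As a baseline for the rule being tuned here, the sketch below implements the Horspool simplification of the bad-character heuristic, which always keys the shift on the text character aligned with the last pattern position. The worst-character rule would instead pick the pattern position maximizing the expected shift under the text's character distribution; that selection step is not reproduced here:

```python
def horspool_search(pattern, text):
    """Boyer-Moore-Horspool exact matching; returns all match positions."""
    m, n = len(pattern), len(text)
    if m == 0 or m > n:
        return []
    # shift[c]: distance from the last occurrence of c in pattern[:-1]
    # to the end of the pattern; unseen characters shift by m.
    shift = {pattern[j]: m - 1 - j for j in range(m - 1)}
    occurrences, i = [], 0
    while i <= n - m:
        if text[i:i + m] == pattern:
            occurrences.append(i)
        i += shift.get(text[i + m - 1], m)
    return occurrences
```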
The capacitated vehicle routing problem (CVRP) involves distributing (identical) items from a depot to a set of demand locations, using a single capacitated vehicle. We study a generalization of this problem to the setting of multiple vehicles having non-uniform speeds (which we call Heterogenous CVRP), and present a constant-factor approximation algorithm. The technical heart of our result lies in achieving a constant approximation to the following TSP variant (called Heterogenous TSP). Given a metric denoting distances between vertices and a depot r containing k vehicles with possibly different speeds, the goal is to find a tour for each vehicle (starting and ending at r) so that every vertex is covered in some tour and the maximum completion time is minimized. This problem is precisely Heterogenous CVRP when vehicles are uncapacitated. The presence of non-uniform speeds introduces difficulties for employing standard tour-splitting techniques. In order to get a better understanding of this technique in our context, we appeal to ideas from the 2-approximation algorithm of Lenstra et al. for scheduling on unrelated parallel machines. This motivates the introduction of a new approximate MST construction called Level-Prim, which is related to Light Approximate Shortest-path Trees. The last component of our algorithm involves partitioning the Level-Prim tree and matching the resulting parts to vehicles. This decomposition is more subtle than usual since now we need to enforce correlation between the sizes of the parts and their distances to the depot. | Capacitated Vehicle Routing with Non-Uniform Speeds | 4,563
An approximate sparse recovery system in the ell_1 norm formally consists of parameters N, k, epsilon, an m-by-N measurement matrix Phi, and a decoding algorithm D. Given a vector x, where x_k denotes the optimal k-term approximation to x, the system approximates x by hat_x = D(Phi x), which must satisfy ||hat_x - x||_1 <= (1+epsilon)||x - x_k||_1. Among the goals in designing such systems are minimizing m and the runtime of D. We consider the "forall" model, in which a single matrix Phi is used for all signals x. All previous algorithms that use the optimal number m=O(k log(N/k)) of measurements require superlinear time Omega(N log(N/k)). In this paper, we give the first algorithm for this problem that uses the optimal number of measurements (up to a constant factor) and runs in sublinear time o(N) when k=o(N), assuming access to a data structure requiring space and preprocessing O(N). | Sublinear Time, Measurement-Optimal, Sparse Recovery For All | 4,564
The visualization of a graph plays an important role in various settings, such as graph drawing software. Complex systems (like large databases or networks) that have a graph structure should be properly visualized in order to avoid obfuscation. One way to provide an aesthetic improvement to a graph visualization is to apply a force-directed drawing algorithm to it. This method, which emerged in the 1960s, views graphs as spring systems that exert forces (repulsive or attractive) on the nodes. A Lombardi drawing of a graph is a drawing where the edges are drawn as circular arcs (straight edges are considered circular arcs of infinite radius) with perfect angular resolution. This means that consecutive edges around a vertex are equally spaced around it. In other words, each angle between the tangents of two consecutive edges equals $2\pi/d$, where d is the degree of that specific vertex. The use of circular edges is necessary when perfect angular resolution is required, since even cycle graphs cannot be drawn with straight edges and perfect angular resolution. In this survey, we provide an algorithm that takes as input a random drawing of a graph and produces its Lombardi drawing, giving a proper visualization of the graph. | Transforming a random graph drawing into a Lombardi drawing | 4,565
This paper addresses the online exact string matching problem, which consists in finding all occurrences of a given pattern p in a text t. It is an extensively studied problem in computer science, mainly due to its direct applications to such diverse areas as text, image and signal processing, speech analysis and recognition, data compression, information retrieval, computational biology and chemistry. Since 1970 more than 80 string matching algorithms have been proposed, more than half of them in the last ten years. In this note we present a comprehensive list of all string matching algorithms and report experimental results in order to compare them from a practical point of view. Our experimental evaluation shows that the performances of the algorithms differ considerably for different alphabet sizes and pattern lengths. | The Exact String Matching Problem: a Comprehensive Experimental Evaluation | 4,566
We consider the minimum vertex cover problem in hypergraphs in which every hyperedge has size k (also known as minimum hitting set problem, or minimum set cover with element frequency k). Simple algorithms exist that provide k-approximations, and this is believed to be the best possible approximation achievable in polynomial time. We show how to exploit density and regularity properties of the input hypergraph to break this barrier. In particular, we provide a randomized polynomial-time algorithm with approximation factor k/(1 +(k-1)d/(k Delta)), where d and Delta are the average and maximum degree, respectively, and Delta must be Omega(n^{k-1}/log n). The proposed algorithm generalizes the recursive sampling technique of Imamura and Iwama (SODA'05) for vertex cover in dense graphs. As a corollary, we obtain an approximation factor k/(2-1/k) for subdense regular hypergraphs, which is shown to be the best possible under the unique games conjecture. | Approximating Vertex Cover in Dense Hypergraphs | 4,567 |
Bipartite Correlation Clustering is the problem of generating a set of disjoint bi-cliques on a set of nodes while minimizing the symmetric difference to a bipartite input graph. The number or size of the output clusters is not constrained in any way. The best previously known approximation algorithm for this problem gives a factor of 11. This result and all previous ones involve solving large linear or semi-definite programs, which become prohibitive even for modestly sized tasks. In this paper we present an improved factor 4 approximation algorithm for this problem using a simple combinatorial algorithm which does not require solving large convex programs. The analysis extends a method developed by Ailon, Charikar and Newman in 2008, where a randomized pivoting algorithm was analyzed to obtain a 3-approximation algorithm for Correlation Clustering, the same problem on graphs (not bipartite). The analysis for Correlation Clustering there required defining events for structures containing 3 vertices and using the probability of these events to produce a feasible solution to a dual of a certain natural LP bounding the optimal cost. It is tempting here to use sets of 4 vertices, which are the smallest structures for which contradictions arise for Bipartite Correlation Clustering. This simple idea, however, appears to be evasive. We show that, by modifying the LP, we can analyze algorithms which take into consideration subgraph structures of unbounded size. We believe our techniques are interesting in their own right, and may be used for other problems as well. | An Improved Algorithm for Bipartite Correlation Clustering | 4,568
Finding heavy elements (heavy hitters) in streaming data is one of the central and well-understood tasks. Despite the importance of this problem, when considering the sliding windows model of streaming (where elements eventually expire), the problem of finding L_2-heavy elements has remained completely open despite multiple papers and considerable success in finding L_1-heavy elements. In this paper, we develop the first poly-logarithmic-memory algorithm for finding L_2-heavy elements in the sliding window model. Since L_2-heavy elements play a central role in many fundamental streaming problems (such as frequency moments), we believe our method will be extremely useful for many sliding-window algorithms and applications. For example, our technique allows us not only to find L_2-heavy elements, but also heavy elements with respect to any L_p for 0<p<2 on sliding windows. Thus, our paper completely resolves the question of finding L_p-heavy elements for sliding windows with poly-logarithmic memory for all values of p, since it is well known that for p>2 this task is impossible. Our method may have other applications as well. We demonstrate a broader applicability of our novel yet simple method on two additional examples: we show how to obtain a sliding window approximation of other properties such as the similarity of two streams, or the fraction of elements that appear exactly a specified number of times within the window (the rarity problem). In these two illustrative examples of our method, we replace the current expected memory bounds with worst case bounds. | How to Catch L_2-Heavy-Hitters on Sliding Windows | 4,569
We consider the discrepancy problem of coloring $n$ intervals with $k$ colors such that, at each point on the line, the maximal difference between the numbers of intervals of any two colors is minimal. Somewhat surprisingly, a coloring with maximal difference at most one always exists. Furthermore, we give an algorithm with running time $O(n \log n + kn \log k)$ for its construction. This is in particular interesting because many known results for discrepancy problems are non-constructive. This problem naturally models a load balancing scenario, where $n$ tasks with given start and end times have to be distributed among $k$ servers. Our results imply that this can be done ideally balanced. When generalizing to $d$-dimensional boxes (instead of intervals), a solution with difference at most one is not always possible. We show that for any $d \ge 2$ and any $k \ge 2$ it is NP-complete to decide if such a solution exists, which implies also NP-hardness of the respective minimization problem. In an online scenario, where intervals arrive over time and the color has to be decided upon arrival, the maximal difference in the size of color classes can become arbitrarily high for any online algorithm. | Balanced Interval Coloring | 4,570
The 2010 Workshop on Algorithms for Modern Massive Data Sets (MMDS 2010) was held at Stanford University, June 15--18. The goals of MMDS 2010 were (1) to explore novel techniques for modeling and analyzing massive, high-dimensional, and nonlinearly-structured scientific and Internet data sets; and (2) to bring together computer scientists, statisticians, applied mathematicians, and data analysis practitioners to promote cross-fertilization of ideas. MMDS 2010 followed on the heels of two previous MMDS workshops. The first, MMDS 2006, addressed the complementary perspectives brought by the numerical linear algebra and theoretical computer science communities to matrix algorithms in modern informatics applications; and the second, MMDS 2008, explored more generally fundamental algorithmic and statistical challenges in modern large-scale data analysis. | Computation in Large-Scale Scientific and Internet Data Applications is a Focus of MMDS 2010 | 4,571
The suffix tree is a very important data structure in string processing, but it suffers from a huge space consumption. In large-scale applications, compressed suffix trees (CSTs) are therefore used instead. A CST consists of three (compressed) components: the suffix array, the LCP-array, and data structures for simulating navigational operations on the suffix tree. The LCP-array stores the lengths of the longest common prefixes of lexicographically adjacent suffixes, and it can be computed in linear time. In this paper, we present new LCP-array construction algorithms that are fast and very space efficient. In practice, our algorithms outperform the currently best algorithms. | Lightweight LCP-Array Construction in Linear Time | 4,572 |
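For orientation, the classic linear-time LCP construction such algorithms are measured against is the one by Kasai et al.; a sketch of that textbook algorithm (not the new algorithms of the paper) follows:

```python
def kasai_lcp(s, sa):
    """Kasai et al.'s linear-time LCP construction from a suffix array.
    lcp[i] = length of the longest common prefix of the suffixes
    starting at sa[i-1] and sa[i]; lcp[0] = 0 by convention."""
    n = len(s)
    rank = [0] * n
    for i, suffix in enumerate(sa):
        rank[suffix] = i
    lcp = [0] * n
    h = 0
    for i in range(n):
        if rank[i] > 0:
            j = sa[rank[i] - 1]
            while i + h < n and j + h < n and s[i + h] == s[j + h]:
                h += 1
            lcp[rank[i]] = h
            if h > 0:
                h -= 1        # the LCP shrinks by at most 1 per step
        else:
            h = 0
    return lcp
```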
We present new theoretical results on differentially private data release useful with respect to any target class of counting queries, coupled with experimental results on a variety of real world data sets. Specifically, we study a simple combination of the multiplicative weights approach of [Hardt and Rothblum, 2010] with the exponential mechanism of [McSherry and Talwar, 2007]. The multiplicative weights framework allows us to maintain and improve a distribution approximating a given data set with respect to a set of counting queries. We use the exponential mechanism to select those queries most incorrectly tracked by the current distribution. Combining the two, we quickly approach a distribution that agrees with the data set on the given set of queries up to small error. The resulting algorithm and its analysis are simple, but nevertheless improve upon previous work in terms of both error and running time. We also empirically demonstrate the practicality of our approach on several data sets commonly used in the statistical community for contingency table release. | A simple and practical algorithm for differentially private data release | 4,573
For a given collection G of directed graphs we define the join-reachability graph of G, denoted by J(G), as the directed graph that, for any pair of vertices a and b, contains a path from a to b if and only if such a path exists in all graphs of G. Our goal is to compute an efficient representation of J(G). In particular, we consider two versions of this problem. In the explicit version we wish to construct the smallest join-reachability graph for G. In the implicit version we wish to build an efficient data structure (in terms of space and query time) such that we can quickly report the set of vertices that reach a query vertex in all graphs of G. This problem is related to the well-studied reachability problem and is motivated by emerging applications of graph-structured databases and graph algorithms. We consider the construction of join-reachability structures for two graphs and develop techniques that can be applied to both the explicit and the implicit problem. First we present optimal and near-optimal structures for paths and trees. Then, based on these results, we provide efficient structures for planar graphs and general directed graphs. | Join-Reachability Problems in Directed Graphs | 4,574
Consider the following problem: given a set system (U,I) and an edge-weighted graph G = (U, E) on the same universe U, find the set A in I such that the Steiner tree cost with terminals A is as large as possible: "which set in I is the most difficult to connect up?" This is an example of a max-min problem: find the set A in I such that the value of some minimization (covering) problem is as large as possible. In this paper, we show that for certain covering problems which admit good deterministic online algorithms, we can give good algorithms for max-min optimization when the set system I is given by a p-system or q-knapsacks or both. This result is similar to results for constrained maximization of submodular functions. Although many natural covering problems are not even approximately submodular, we show that one can use properties of the online algorithm as a surrogate for submodularity. Moreover, we give stronger connections between max-min optimization and two-stage robust optimization, and hence give improved algorithms for robust versions of various covering problems, for cases where the uncertainty sets are given by p-systems and q-knapsacks. | Robust and MaxMin Optimization under Matroid and Knapsack Uncertainty Sets | 4,575
A problem studied in Systems Biology is how to find shortest paths in metabolic networks. Unfortunately, simple (i.e., graph theoretic) shortest paths do not properly reflect biochemical facts. An approach to overcome this issue is to use edge labels and search for paths with distinct labels. In this paper, we show that such biologically feasible shortest paths are hard to compute. Moreover, we present solutions to find such paths in networks in reasonable time. | Shortest Paths with Pairwise-Distinct Edge Labels: Finding Biochemical Pathways in Metabolic Networks | 4,576
We propose a new method for defragmenting the module layout of a reconfigurable device, enabled by a novel approach for dealing with communication needs between relocated modules and with inhomogeneities found in commonly used FPGAs. Our method is based on dynamic relocation of module positions during runtime, with only very little reconfiguration overhead; the objective is to maximize the length of contiguous free space that is available for new modules. We describe a number of algorithmic aspects of good defragmentation, and present an optimization method based on tabu search. Experimental results indicate that we can improve the quality of the module layout by roughly 50% over a static layout. Among other benefits, this improvement avoids unnecessary rejections of modules. | No-Break Dynamic Defragmentation of Reconfigurable Devices | 4,577
Let G be an edge-weighted hypergraph on n vertices with m edges of size at most s, where the edges have real weights in an interval [1,W]. We show that if we can approximate a maximum weight matching in G within factor alpha in time T(n,m,W), then we can find a matching of weight at least (alpha-epsilon) times the maximum weight of a matching in G in time $(\epsilon^{-1})^{O(1)}\max_{1\le q \le O(\epsilon \log(n/\epsilon)/\log \epsilon^{-1})}\ \max_{m_1+\cdots+m_q=m}\ \sum_{j=1}^{q} T(\min\{n, s m_j\}, m_j, (\epsilon^{-1})^{O(\epsilon^{-1})})$. In particular, if we combine our result with the recent (1-\epsilon)-approximation algorithm for maximum weight matching in graphs due to Duan and Pettie, whose time complexity has a poly-logarithmic dependence on W, then we obtain a (1-\epsilon)-approximation algorithm for maximum weight matching in graphs running in time $(\epsilon^{-1})^{O(1)}(m+n)$. | Near approximation of maximum weight matching through efficient weight reduction | 4,578
Suppose that each member of a set of agents has a preference list over a subset of houses, possibly involving ties, and that each agent and each house has a capacity denoting the maximum number of houses/agents (respectively) that can be matched to it. We want to find a matching $M$ for which there is no other matching $M'$ such that more agents prefer $M'$ to $M$ than $M$ to $M'$. (What it means for an agent to prefer one matching to the other is explained in the paper.) Popular matchings have been studied quite extensively, especially in the one-to-one setting. We provide a characterization of popular b-matchings for two definitions of popularity, show some NP-hardness results, and describe polynomial-time algorithms for certain versions. | Popular b-matchings | 4,579
In this paper I study how to reduce the time complexity of Earliest Deadline First (EDF), a global scheduling scheme for real-time tasks on a multiprocessor system. Several admission control algorithms for EDF are presented, both for hard and soft real-time tasks. The average performance of these admission control algorithms is compared with the performance of known partitioning schemes. I apply some modifications to the global EDF algorithm to decrease the number of task migrations and to add predictability to its behavior. The aim of this work is to provide a sensitivity analysis for task deadlines in the multiprocessor context, using a new approach, the EFDF (Earliest Feasible Deadline First) algorithm. In order to decrease the number of migrations, we prevent a job from moving from one processor to another if it is among the m highest-priority jobs; a job therefore continues its execution on the same processor when possible (processor affinity). The results of these comparisons outline situations where one scheme is preferable to the other: partitioning schemes are better suited for hard real-time systems, while a global scheme is preferable for soft real-time systems. | An Algorithm to Reduce the Time Complexity of Earliest Deadline First Scheduling Algorithm in Real-Time System | 4,580
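To make the scheduling rule concrete, here is a minimal sketch of global EDF dispatching at a single instant, with the migration-limiting idea reduced to a crude processor-affinity preference. All names are illustrative; this is not the paper's EFDF algorithm:

```python
def edf_dispatch(ready_jobs, m):
    """Global EDF at one scheduling instant: run the m ready jobs with
    the earliest absolute deadlines.  Jobs are (deadline, job_id,
    last_cpu) tuples; a job keeps its previous processor when free."""
    chosen = sorted(ready_jobs)[:m]          # earliest deadlines first
    free = set(range(m))
    assignment = {}
    for deadline, job, last_cpu in chosen:   # pass 1: honour affinity
        if last_cpu in free:
            assignment[job] = last_cpu
            free.discard(last_cpu)
    for deadline, job, last_cpu in chosen:   # pass 2: fill the rest
        if job not in assignment:
            assignment[job] = free.pop()
    return assignment
```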
Practical data structures for edit-sensitive parsing (ESP) are proposed. Given a string S, its ESP tree is equivalent to a context-free grammar G generating just S, which is represented by a DAG. Using succinct data structures for trees and permutations, G is decomposed into two LOUDS bit strings and a single array in (1+\epsilon)n\log n+4n+o(n) bits for any 0<\epsilon<1, where n is the number of variables in G. The time to count the occurrences of P in S is $O(\frac{1}{\epsilon}(m\log n+occ_c\log m\log u))$, where m = |P|, u = |S|, and occ_c is the number of occurrences of a maximal common subtree in the ESPs of P and S. The efficiency of the proposed index is evaluated by experiments conducted on several benchmarks, comparing with other compressed indexes. | A Searchable Compressed Edit-Sensitive Parsing | 4,581
In a scheduling game, each player owns a job and chooses a machine to execute it. While the social cost is the maximal load over all machines (makespan), the cost (disutility) of each player is the completion time of its own job. In the game, players may follow selfish strategies to optimize their cost and therefore their behaviors do not necessarily lead the game to an equilibrium. Even in the case there is an equilibrium, its makespan might be much larger than the social optimum, and this inefficiency is measured by the price of anarchy -- the worst ratio between the makespan of an equilibrium and the optimum. Coordination mechanisms aim to reduce the price of anarchy by designing scheduling policies that specify how jobs assigned to a same machine are to be scheduled. Typically these policies define the schedule according to the processing times as announced by the jobs. One could wonder if there are policies that do not require this knowledge, and still provide a good price of anarchy. This would make the processing times be private information and avoid the problem of truthfulness. In this paper we study these so-called non-clairvoyant policies. In particular, we study the RANDOM policy that schedules the jobs in a random order without preemption, and the EQUI policy that schedules the jobs in parallel using time-multiplexing, assigning each job an equal fraction of CPU time. | Non-clairvoyant Scheduling Games | 4,582 |
This paper studies the "explanation problem" for tree- and linearly-ordered array data, a problem motivated by database applications and recently solved for the one-dimensional tree-ordered case. In this paper, one is given a matrix A whose rows and columns have semantics: special subsets of the rows and special subsets of the columns are meaningful, others are not. A submatrix in A is said to be meaningful if and only if it is the cross product of a meaningful row subset and a meaningful column subset, in which case we call it an "allowed rectangle." The goal is to "explain" A as a sparse sum of weighted allowed rectangles. Specifically, we wish to find as few weighted allowed rectangles as possible such that, for all i,j, a_{ij} equals the sum of the weights of all rectangles which include cell (i,j). In this paper we consider the natural cases in which the matrix dimensions are tree-ordered or linearly-ordered. In the tree-ordered case, we are given a rooted tree T1 whose leaves are the rows of A and another, T2, whose leaves are the columns. Nodes of the trees correspond in an obvious way to the sets of their leaf descendants. In the linearly-ordered case, a set of rows or columns is meaningful if and only if it is contiguous. For tree-ordered data, we prove the explanation problem NP-Hard and give a randomized 2-approximation algorithm for it. For linearly-ordered data, we prove the explanation problem NP-Hard and give a 2.56-approximation algorithm. To our knowledge, these are the first results for the problem of sparsely and exactly representing matrices by weighted rectangles. | On Parsimonious Explanations for 2-D Tree- and Linearly-Ordered Data | 4,583 |
We study the problem of maximizing constrained non-monotone submodular functions and provide approximation algorithms that improve existing algorithms in terms of either the approximation factor or simplicity. Our algorithms combine existing local search and greedy-based algorithms. Different constraints that we study are exact cardinality and multiple knapsack constraints. For the multiple-knapsack constraints we achieve a $(0.25-2\epsilon)$-factor algorithm. We also show, as our main contribution, how to use the continuous greedy process for non-monotone functions and, as a result, obtain a $0.13$-factor approximation algorithm for maximization over any solvable down-monotone polytope. The continuous greedy process has been previously used for maximizing smooth monotone submodular function over a down-monotone polytope \cite{CCPV08}. This implies a 0.13-approximation for several discrete problems, such as maximizing a non-negative submodular function subject to a matroid constraint and/or multiple knapsack constraints. | Maximizing Non-monotone Submodular Set Functions Subject to Different Constraints: Combined Algorithms | 4,584
A new approach to the static route planning problem, based on a multi-staging concept and a \emph{scope} notion, is presented. The main goal of our approach (besides the implied efficiency of planning) is to address, with a solid theoretical foundation, the following two practically motivated aspects: \emph{route comfort} and the very \emph{limited storage} space of a small navigation device, neither of which seems to be among the chief objectives of many other studies. We show how our novel idea can tackle both these seemingly unrelated aspects at once, and how it may also contribute to other established route planning approaches with which ours can be naturally combined. We provide a theoretical proof that our approach efficiently computes exact optimal routes within this concept, and we demonstrate its good practical performance with experimental results on publicly available road networks of the US. | Multi-Stage Improved Route Planning Approach: theoretical foundations | 4,585
The PDPTW is a vehicle routing optimization problem in which requests for transport between suppliers and customers must be met while satisfying precedence, capacity, and time constraints. In this paper, we present a genetic algorithm for multi-objective optimization of a dynamic multi pickup and delivery problem with time windows (Dynamic m-PDPTW). We give a brief literature review of the PDPTW and present our approach, based on the Pareto dominance method and lower bounds, to produce a satisfying solution to the Dynamic m-PDPTW that minimizes the compromise between total travel cost and total tardiness time. Computational results indicate that the proposed algorithm gives good results, with a total tardiness equal to zero at a tolerable cost. | Multi-objective Optimization For The Dynamic Multi-Pickup and Delivery Problem with Time Windows | 4,586
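The Pareto dominance test at the core of such a bi-objective method is easy to state. A minimal sketch for (total cost, total tardiness) vectors, where smaller is better in both objectives (an illustration, not the paper's genetic algorithm):

```python
def dominates(a, b):
    """True iff a Pareto-dominates b (minimization): a is no worse in
    every objective and strictly better in at least one."""
    return (all(x <= y for x, y in zip(a, b))
            and any(x < y for x, y in zip(a, b)))

def pareto_front(solutions):
    """Keep the non-dominated objective vectors."""
    return [s for s in solutions
            if not any(dominates(t, s) for t in solutions if t != s)]
```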
We show how to modify the linear-time construction algorithm for suffix arrays based on induced sorting (Nong et al., DCC'09) such that it computes the array of longest common prefixes (LCP-array) as well. Practical tests show that this outperforms recent LCP-array construction algorithms (Gog and Ohlebusch, ALENEX'11). | Inducing the LCP-Array | 4,587 |
Two multivehicle routing problems are considered in the framework that a visit to a location must take place during a specific time window in order to be counted and all time windows are the same length. In the first problem, the goal is to visit as many locations as possible using a fixed number of vehicles. In the second, the goal is to visit all locations using the smallest number of vehicles possible. For the first problem, we present an approximation algorithm whose output path collects a reward within a constant factor of optimal for any fixed number of vehicles. For the second problem, our algorithm finds a 6-approximation to the problem on a tree metric, whenever a single vehicle could visit all locations during their time windows. | Two Multivehicle Routing Problems with Unit-Time Windows | 4,588 |
A bicriteria approximation algorithm is presented for the unrooted traveling repairman problem, realizing increased profit in return for increased speedup of repairman motion. The algorithm generalizes previous results from the case in which all time windows are the same length to the case in which their lengths can range between 1 and 2. This analysis can extend to any range of time window lengths, following our earlier techniques. This relationship between repairman profit and speedup is applicable over a range of values that is dependent on the cost of putting the input in an especially desirable form, involving what are called "trimmed windows." For time windows with lengths between 1 and 2, the range of values for speedup $s$ for which our analysis holds is $1 \leq s \leq 6$. In this range, we establish an approximation ratio that is constant for any specific value of $s$. | Speedup in the Traveling Repairman Problem with Constrained Time Windows | 4,589
We introduce the first self-index based on the Lempel-Ziv 1977 compression format (LZ77). It is particularly competitive for highly repetitive text collections such as sequence databases of genomes of related species, software repositories, versioned document collections, and temporal text databases. Such collections are extremely compressible but classical self-indexes fail to capture that source of compressibility. Our self-index takes in practice a few times the space of the text compressed with LZ77 (as little as 2.6 times), extracts 1--2 million characters of the text per second, and finds patterns at a rate of 10--50 microseconds per occurrence. It is smaller (up to one half) than the best current self-index for repetitive collections, and faster in many cases. | Self-Index Based on LZ77 | 4,590 |
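To fix ideas about the parse underlying such an index, here is a naive quadratic-time LZ77 factorization. The self-index is built on (a variant of) this parse; none of the index machinery is reproduced, and real constructions are far more efficient:

```python
def lz77_parse(s):
    """Greedy LZ77 factorization: each phrase is the longest prefix of
    the remaining text that also starts at an earlier position, plus
    one fresh character.  Returns (source_pos, length, next_char)
    triples.  O(n^2) time -- for illustration only."""
    phrases, i, n = [], 0, len(s)
    while i < n:
        best_len, best_pos = 0, -1
        for j in range(i):                  # candidate earlier start
            l = 0
            while i + l < n and s[j + l] == s[i + l]:
                l += 1                      # sources may self-overlap
            if l > best_len:
                best_len, best_pos = l, j
        nxt = s[i + best_len] if i + best_len < n else ''
        phrases.append((best_pos, best_len, nxt))
        i += best_len + 1
    return phrases
```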
A mode of a multiset $S$ is an element $a \in S$ of maximum multiplicity; that is, $a$ occurs at least as frequently as any other element in $S$. Given a list $A[1:n]$ of $n$ items, we consider the problem of constructing a data structure that efficiently answers range mode queries on $A$. Each query consists of an input pair of indices $(i, j)$ for which a mode of $A[i:j]$ must be returned. We present an $O(n^{2-2\epsilon})$-space static data structure that supports range mode queries in $O(n^\epsilon)$ time in the worst case, for any fixed $\epsilon \in [0,1/2]$. When $\epsilon = 1/2$, this corresponds to the first linear-space data structure to guarantee $O(\sqrt{n})$ query time. We then describe three additional linear-space data structures that provide $O(k)$, $O(m)$, and $O(|j-i|)$ query time, respectively, where $k$ denotes the number of distinct elements in $A$ and $m$ denotes the frequency of the mode of $A$. Finally, we examine generalizing our data structures to higher dimensions. | Linear-Space Data Structures for Range Mode Query in Arrays | 4,591 |
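The epsilon = 0 corner of the tradeoff above (O(n^2) space, O(1) query time) admits a short precomputation, sketched below. The linear-space O(sqrt(n))-query structure itself is considerably more involved and is not reproduced here:

```python
def build_mode_table(A):
    """mode[i][j] = a mode of A[i..j] (inclusive).  O(n^2) time and
    space, after which each range mode query takes O(1) time."""
    n = len(A)
    mode = [[None] * n for _ in range(n)]
    for i in range(n):
        count, best, best_freq = {}, None, 0
        for j in range(i, n):
            c = count.get(A[j], 0) + 1
            count[A[j]] = c
            if c > best_freq:
                best, best_freq = A[j], c
            mode[i][j] = best
    return mode
```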
We study the setting in which the bits of an unknown infinite binary sequence x are revealed sequentially to an observer. We show that very limited assumptions about x allow one to make successful predictions about unseen bits of x. First, we study the problem of successfully predicting a single 0 from among the bits of x. In our model we have only one chance to make a prediction, but may do so at a time of our choosing. We describe and motivate this as the problem of a frog who wants to cross a road safely. Letting N_t denote the number of 1s among the first t bits of x, we say that x is "eps-weakly sparse" if lim inf (N_t/t) <= eps. Our main result is a randomized algorithm that, given any eps-weakly sparse sequence x, predicts a 0 of x with success probability as close as desired to 1 - eps. Thus we can perform this task with essentially the same success probability as under the much stronger assumption that each bit of x takes the value 1 independently with probability eps. We apply this result to show how to successfully predict a bit (0 or 1) under a broad class of possible assumptions on the sequence x. The assumptions are stated in terms of the behavior of a finite automaton M reading the bits of x. We also propose and solve a variant of the well-studied "ignorant forecasting" problem. For every eps > 0, we give a randomized forecasting algorithm S_eps that, given sequential access to a binary sequence x, makes a prediction of the form: "A p fraction of the next N bits will be 1s." (The algorithm gets to choose p, N, and the time of the prediction.) For any fixed sequence x, the forecast fraction p is accurate to within +-eps with probability 1 - eps. | High-Confidence Predictions under Adversarial Uncertainty | 4,592
We develop a technique that we call Conflict Packing in the context of kernelization, obtaining (and improving) several polynomial kernels for editing problems on dense instances. We apply this technique to several well-studied problems: Feedback Arc Set in (Bipartite) Tournaments, Dense Rooted Triplet Inconsistency and Betweenness in Tournaments. For the first, one is given a (bipartite) tournament $T = (V,A)$ and seeks a set of at most $k$ arcs whose reversal in $T$ results in an acyclic (bipartite) tournament. While a linear vertex-kernel is already known for this problem, using Conflict Packing allows us to find a so-called safe partition, the central tool of the known kernelization algorithm, with simpler arguments. For the case of bipartite tournaments, the same technique allows us to obtain a quadratic vertex-kernel. Again, such a kernel was already known to exist, using the concept of so-called bimodules. We believe, however, that providing a unifying technique to cope with such problems is interesting. Regarding Dense Rooted Triplet Inconsistency, one is given a set of vertices $V$ and a dense collection $\mathcal{R}$ of rooted binary trees over three vertices of $V$, and seeks a rooted tree over $V$ containing all but at most $k$ triplets from $\mathcal{R}$. As a main consequence of our technique, we prove that the Dense Rooted Triplet Inconsistency problem admits a linear vertex-kernel. This result improves the best known bound of $O(k^2)$ vertices for this problem. Finally, we use this technique to obtain a linear vertex-kernel for Betweenness in Tournaments, where one is given a set of vertices $V$ and a dense collection $\mathcal{R}$ of so-called betweenness triplets, and seeks a linear ordering of the vertices containing all but at most $k$ triplets from $\mathcal{R}$. | Conflict Packing: an unifying technique to obtain polynomial kernels for editing problems on dense instances | 4,593
In this paper we consider methods for dynamically storing a set of different objects ("modules") in a physical array. Each module requires one free contiguous subinterval in order to be placed. Items are inserted or removed, resulting in a fragmented layout that makes it harder to insert further modules. It is possible to relocate modules, one at a time, to another free subinterval that is contiguous and does not overlap with the current location of the module. These constraints clearly distinguish our problem from classical memory allocation. We present a number of algorithmic results, including a bound of Theta(n^2) on physical sorting if there is a sufficiently large free space and sum up NP-hardness results for arbitrary initial layouts. For online scenarios in which modules arrive one at a time, we present a method that requires O(1) moves per insertion or deletion and amortized cost O(m_i log M) per insertion or deletion, where m_i is the module's size, M is the size of the largest module and costs for moves are linear in the size of a module. | Maintaining Arrays of Contiguous Objects | 4,594 |
The problem of storing a set of strings --- a string dictionary --- in compact form appears naturally in many cases. While classically it has represented a small part of the whole data to be processed (e.g., for Natural Language processing or for indexing text collections), more recent applications in Web engines, Web mining, RDF graphs, Internet routing, Bioinformatics, and many others, make use of very large string dictionaries whose size is a significant fraction of the whole data. Thus novel approaches to compress them efficiently are necessary. In this paper we experimentally compare the time and space performance of some existing alternatives, as well as new ones we propose. We show that space reductions of up to 20% of the original size of the strings are possible while supporting fast dictionary searches. | Compressed String Dictionaries | 4,595
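One classic technique evaluated in comparisons of this kind is front coding of the sorted dictionary: each string is stored as the length of the prefix it shares with its predecessor plus the remaining suffix. A minimal sketch (the compressed variants studied in such papers go well beyond this):

```python
def front_code(strings):
    """Encode sorted strings as (shared_prefix_len, suffix) pairs."""
    out, prev = [], ""
    for s in sorted(strings):
        l = 0
        while l < min(len(prev), len(s)) and prev[l] == s[l]:
            l += 1
        out.append((l, s[l:]))
        prev = s
    return out

def front_decode(pairs):
    """Invert front_code, recovering the sorted strings."""
    out, prev = [], ""
    for l, suffix in pairs:
        prev = prev[:l] + suffix
        out.append(prev)
    return out
```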
We consider the complexity of problems related to the combinatorial game Free-Flood-It, in which players aim to make a coloured graph monochromatic with the minimum possible number of flooding operations. Our main result is that computing the length of an optimal sequence is fixed parameter tractable (with the number of colours present as a parameter) when restricted to rectangular 2xn boards. We also show that, when the number of colours is unbounded, the problem remains NP-hard on such boards. This resolves a question of Clifford, Jalsenius, Montanaro and Sach (2010). | The complexity of Free-Flood-It on 2xn boards | 4,596 |
We consider the complexity of problems related to the combinatorial game Free-Flood-It, in which players aim to make a coloured graph monochromatic with the minimum possible number of flooding operations. Although computing the minimum number of moves required to flood an arbitrary graph is known to be NP-hard, we demonstrate a polynomial time algorithm to compute the minimum number of moves required to link each pair of vertices. We apply this result to compute in polynomial time the minimum number of moves required to flood a path, and an additive approximation to this quantity for an arbitrary k x n board, coloured with a bounded number of colours, for any fixed k. On the other hand, we show that, for k>=3, determining the minimum number of moves required to flood a k x n board coloured with at least four colours remains NP-hard. | The complexity of flood-filling games on graphs | 4,597 |
Dynamic graphs have emerged as an appropriate model to capture the changing nature of many modern networks, such as peer-to-peer overlays and mobile ad hoc networks. Most of the recent research on dynamic networks has addressed only the undirected dynamic graph model; however, realistic networks such as the ones identified above are directed. In this paper we present early work on the properties of directed dynamic graphs. In particular, we explore the problem of random walks in such graphs, assuming an oblivious adversary that makes arbitrary changes in every communication round. We study the cover time of the dynamic graph, which even in the static case can be exponential, and we establish an upper bound of O(d_max n^3 log^2 n) on the cover time for balanced dynamic graphs. | Random Walk on Directed Dynamic Graphs | 4,598
A binary matrix satisfies the consecutive ones property (COP) if its columns can be permuted such that the ones in each row of the resulting matrix are consecutive. Equivalently, a family of sets F = {Q_1,..,Q_m}, where Q_i is subset of R for some universe R, satisfies the COP if the symbols in R can be permuted such that the elements of each set Q_i occur consecutively, as a contiguous segment of the permutation of R's symbols. We consider the COP version on multisets and prove that counting its solutions is difficult (#P-complete). We prove completeness results also for counting the frontiers of PQ-trees, which are typically used for testing the COP on sets, thus showing that a polynomial algorithm is unlikely to exist when dealing with multisets. We use a combinatorial approach based on parsimonious reductions from the Hamiltonian path problem, showing that the decisional version of our problems is therefore NP-complete. | Consecutive Ones Property and PQ-Trees for Multisets: Hardness of Counting Their Orderings | 4,599