In the k-median problem we are given sets of facilities and customers, and distances between them. For a given set F of facilities, the cost of serving a customer u is the minimum distance between u and a facility in F. The goal is to find a set F of k facilities that minimizes the sum, over all customers, of their service costs. Following Mettu and Plaxton, we study the incremental medians problem, where k is not known in advance, and the algorithm produces a nested sequence of facility sets where the kth set has size k. The algorithm is c-cost-competitive if the cost of each set is at most c times the cost of the optimum set of size k. We give improved incremental algorithms for the metric version: an 8-cost-competitive deterministic algorithm, a 2e ~ 5.44-cost-competitive randomized algorithm, a (24+epsilon)-cost-competitive, poly-time deterministic algorithm, and a (6e+epsilon ~ 16.31)-cost-competitive, poly-time randomized algorithm. The algorithm is s-size-competitive if the cost of the kth set is at most the minimum cost of any set of size k, and has size at most s k. The optimal size-competitive ratios for this problem are 4 (deterministic) and e (randomized). We present the first poly-time O(log m)-size-approximation algorithm for the offline problem and the first poly-time O(log m)-size-competitive algorithm for the incremental problem. Our proofs reduce incremental medians to the following online bidding problem: faced with an unknown threshold T, an algorithm submits "bids" until it submits a bid that is at least the threshold. It pays the sum of all its bids. We prove that folklore algorithms for online bidding are optimally competitive.
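The optimal deterministic ratio of 4 is achieved by the folklore doubling strategy. A minimal Python sketch (ours, not the paper's code) illustrating why doubling is 4-competitive:

```python
def doubling_bids(threshold):
    """Folklore doubling strategy for online bidding: bid 1, 2, 4, ...
    until a bid reaches the unknown threshold T.  The winning bid is
    less than 2T and the bids sum to less than twice the winning bid,
    so the total paid is less than 4T (4-competitive).
    """
    total, bid = 0, 1
    while True:
        total += bid
        if bid >= threshold:
            return total
        bid *= 2

for t in [1, 3, 7, 100, 1000]:
    assert doubling_bids(t) < 4 * t
```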
Incremental Medians via Online Bidding
4,000
The Reverse Greedy algorithm (RGreedy) for the k-median problem works as follows. It starts by placing facilities on all nodes. At each step, it removes a facility to minimize the resulting total distance from the customers to the remaining facilities. It stops when k facilities remain. We prove that, if the distance function is metric, then the approximation ratio of RGreedy is between Omega(log n / log log n) and O(log n).
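The procedure is direct to state in code. A naive, unoptimized sketch (ours; `dist[u][f]` is an assumed customer-to-node distance matrix):

```python
def reverse_greedy(dist, k):
    """RGreedy: open facilities at all nodes, then repeatedly close the
    facility whose removal increases the total service cost the least,
    stopping when k facilities remain.  Costs are recomputed naively;
    a real implementation would maintain them incrementally.
    """
    facilities = set(range(len(dist[0])))

    def cost(fs):
        # total distance from every customer to its nearest open facility
        return sum(min(row[f] for f in fs) for row in dist)

    while len(facilities) > k:
        best = min(facilities, key=lambda f: cost(facilities - {f}))
        facilities.remove(best)
    return facilities

dist = [[0, 2, 9, 4],
        [2, 0, 3, 7],
        [9, 3, 0, 1]]   # 3 customers, 4 candidate sites
print(reverse_greedy(dist, 2))
```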
The reverse greedy algorithm for the metric k-median problem
4,001
We introduce a new class of non-standard variable-length codes, called adaptive codes. This class of codes associates a variable-length codeword to the symbol being encoded depending on the previous symbols in the input data string. An efficient algorithm for constructing adaptive codes of order one is presented. Then, we introduce a natural generalization of adaptive codes, called GA codes.
Adaptive Codes: A New Class of Non-standard Variable-length Codes
4,002
Conventional wisdom holds that mathematical operations are needed to generate numbers from numbers. We point out that true random numbers can be generated by numbers through an algorithmic process, without any mathematical operation. This implies that the human brain is itself a living true random number generator, and that it can meet the enormous human demand for true random numbers.
Human being is a living random number generator
4,003
We study the multiple-precision addition of two positive floating-point numbers in base 2, with exact rounding, as specified in the MPFR library, i.e. where each number has its own precision. We show how the best possible complexity (up to a constant factor that depends on the implementation) can be obtained.
The Generic Multiple-Precision Floating-Point Addition With Exact Rounding (as in the MPFR Library)
4,004
We study practically efficient methods for performing combinatorial group testing. We present efficient non-adaptive and two-stage combinatorial group testing algorithms, which identify the at most d items out of a given set of n items that are defective, using fewer tests for all practical set sizes. For example, our two-stage algorithm matches the information theoretic lower bound for the number of tests in a combinatorial group testing regimen.
Improved Combinatorial Group Testing Algorithms for Real-World Problem Sizes
4,005
Adaptive variable-length codes associate a variable-length codeword to the symbol being encoded depending on the previous symbols in the input string. This class of codes has been recently presented in [Dragos Trinca, arXiv:cs.DS/0505007] as a new class of non-standard variable-length codes. New algorithms for data compression, based on adaptive variable-length codes of order one and Huffman's algorithm, have been recently presented in [Dragos Trinca, ITCC 2004]. In this paper, we extend the work done so far by the following contributions: first, we propose an improved generalization of these algorithms, called EAHn. Second, we compute the entropy bounds for EAHn, using the well-known bounds for Huffman's algorithm. Third, we discuss implementation details and give reports of experimental results obtained on some well-known corpora. Finally, we describe a parallel version of EAHn using the PRAM model of computation.
EAH: A New Encoder based on Adaptive Variable-length Codes
4,006
In this paper, a sorting technique is presented that takes as input a data set whose primary key domain is known to the sorting algorithm, and works with a time efficiency of O(n+k), where k is the size of the primary key domain. It is shown that the algorithm has applicability over a wide range of data sets. Later, a parallel formulation of the same is proposed and its effectiveness is argued. Though this algorithm is applicable over a wide range of general data sets, it finds special application (much superior to others) in sorting information that arrives in parts and in cases where the input data is huge in size.
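The O(n+k) bound with a known key domain is the signature of counting-style sorting; as a point of reference, here is our sketch of that standard pattern (not necessarily the paper's decision sort):

```python
def counting_sort(records, key, domain_size):
    """Stable O(n + k) sort of records whose key lies in
    range(domain_size), where k = domain_size: one bucket per
    possible key, then concatenate the buckets in key order.
    """
    buckets = [[] for _ in range(domain_size)]
    for r in records:
        buckets[key(r)].append(r)
    return [r for b in buckets for r in b]

data = [("b", 3), ("a", 1), ("c", 3), ("d", 0)]
print(counting_sort(data, key=lambda r: r[1], domain_size=4))
# [('d', 0), ('a', 1), ('b', 3), ('c', 3)]
```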
Decision Sort and its Parallel Implementation
4,007
A coloring of a tree is convex if the vertices that pertain to any color induce a connected subtree; a partial coloring (which assigns colors to some of the vertices) is convex if it can be completed to a convex (total) coloring. Convex colorings of trees arise in areas such as phylogenetics and linguistics: e.g., a perfect phylogenetic tree is one in which the states of each character induce a convex coloring of the tree. Research on perfect phylogeny is usually focused on finding a tree so that few predetermined partial colorings of its vertices are convex. When a coloring of a tree is not convex, it is desirable to know "how far" it is from a convex one. In [19], a natural measure for this distance, called the recoloring distance, was defined: the minimal number of color changes at the vertices needed to make the coloring convex. This can be viewed as minimizing the number of "exceptional vertices" w.r.t. a closest convex coloring. The problem was proved to be NP-hard even for colored strings. In this paper we continue the work of [19], and present a 2-approximation algorithm for convex recoloring of strings whose running time is O(cn), where c is the number of colors and n is the size of the input, and an O(cn^2)-time 3-approximation algorithm for convex recoloring of trees.
Efficient Approximation of Convex Recolorings
4,008
We give the first sorting algorithm with bounds in terms of higher-order entropies: let $S$ be a sequence of length $m$ containing $n$ distinct elements and let $H_\ell(S)$ be the $\ell$th-order empirical entropy of $S$, with $n^{\ell + 1} \log n \in O(m)$; our algorithm sorts $S$ using $(H_\ell(S) + O(1))m$ comparisons.
Sorting a Low-Entropy Sequence
4,009
An explicit algorithm is presented for testing whether two non-directed graphs are isomorphic or not. It is shown that for a graph of n vertices, the number of independent operations needed for the test is polynomial in n. A proof that the algorithm actually performs the test is presented.
Isomorphism of graphs-a polynomial test
4,010
In this paper we describe a new algorithm for buffered global routing according to a prescribed buffer site map. Specifically, we describe a provably good multi-commodity flow based algorithm that finds a global routing minimizing buffer and wire congestion subject to given constraints on routing area (wirelength and number of buffers) and sink delays. Our algorithm allows computing the tradeoff curve between routing area and wire/buffer congestion under any combination of delay and capacity constraints, and simultaneously performs buffer/wire sizing, as well as layer and pin assignment. Experimental results show that near-optimal results are obtained with a practical runtime.
Multicommodity Flow Algorithms for Buffered Global Routing
4,011
In 1994, Burrows and Wheeler developed a data compression algorithm which performs significantly better than Lempel-Ziv based algorithms. Since then, a lot of work has been done in order to improve their algorithm, which is based on a reversible transformation of the input string, called BWT (the Burrows-Wheeler transformation). In this paper, we propose a compression scheme based on BWT, MTF (move-to-front coding), and a version of the algorithms presented in [Dragos Trinca, ITCC-2004].
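Of the stages named above, move-to-front coding is the easiest to make concrete. A minimal sketch (ours), assuming a byte alphabet:

```python
def mtf_encode(data):
    """Move-to-front coding: emit each byte's current position in a
    self-organizing list, then move that byte to the front.  After a
    BWT, runs of equal symbols become runs of small integers, which
    a final entropy coder compresses well.
    """
    table = list(range(256))
    out = []
    for b in data:
        i = table.index(b)
        out.append(i)
        table.insert(0, table.pop(i))
    return out

print(mtf_encode(b"aaabbb"))  # [97, 0, 0, 98, 0, 0]
```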
High-performance BWT-based Encoders
4,012
The well-known Eulerian path problem can be solved in polynomial time (more exactly, there exists a linear time algorithm for this problem). In this paper, we model the problem using a string matching framework, and then initiate an algorithmic study on a variant of this problem, called the (2,1)-STRING-MATCH problem (which is actually a generalization of the Eulerian path problem). Then, we present a polynomial-time algorithm for the (2,1)-STRING-MATCH problem, which is the most important result of this paper. Specifically, we get a lower bound of Omega(n), and an upper bound of O(n^{2}).
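For reference, the linear-time algorithm alluded to in the first sentence is Hierholzer's; a compact sketch of it (ours, for undirected graphs with an Eulerian circuit), separate from the paper's string-matching formulation:

```python
from collections import defaultdict

def eulerian_circuit(edges):
    """Hierholzer's algorithm in O(|E|): walk unused edges, and emit a
    vertex once all its edges are exhausted.  Assumes the graph is
    connected and every vertex has even degree.
    """
    adj = defaultdict(list)
    for i, (u, v) in enumerate(edges):
        adj[u].append((v, i))
        adj[v].append((u, i))
    used = [False] * len(edges)
    stack, circuit = [edges[0][0]], []
    while stack:
        u = stack[-1]
        while adj[u] and used[adj[u][-1][1]]:
            adj[u].pop()                 # drop edges already traversed
        if adj[u]:
            v, i = adj[u].pop()
            used[i] = True
            stack.append(v)
        else:
            circuit.append(stack.pop())  # vertex exhausted
    return circuit[::-1]

print(eulerian_circuit([(0, 1), (1, 2), (2, 0)]))  # [0, 2, 1, 0]
```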
Modelling the Eulerian Path Problem using a String Matching Framework
4,013
Adaptive codes associate variable-length codewords to symbols being encoded depending on the previous symbols in the input data string. This class of codes has been introduced in [Dragos Trinca, cs.DS/0505007] as a new class of non-standard variable-length codes. New algorithms for data compression, based on adaptive codes of order one, have been presented in [Dragos Trinca, ITCC-2004], where we have behaviorally shown that for a large class of input data strings, these algorithms substantially outperform the Lempel-Ziv universal data compression algorithm. EAH has been introduced in [Dragos Trinca, cs.DS/0505061], as an improved generalization of these algorithms. In this paper, we present a translation of the EAH algorithm into the graph theory.
Modelling the EAH Data Compression Algorithm using Graph Theory
4,014
Adaptive codes have been introduced in [Dragos Trinca, cs.DS/0505007] as a new class of non-standard variable-length codes. These codes associate variable-length codewords to symbols being encoded depending on the previous symbols in the input data string. A new data compression algorithm, called EAH, has been introduced in [Dragos Trinca, cs.DS/0505061], where we have behaviorally shown that for a large class of input data strings, this algorithm substantially outperforms the well-known Lempel-Ziv universal data compression algorithm. In this paper, we translate the EAH encoder into automata theory.
Translating the EAH Data Compression Algorithm into Automata Theory
4,015
This article introduces an adaptive sorting algorithm that can relocate elements accurately by substituting their values into a function which we call the guessing function. We focus on building this function, which is essentially the mapping between record values and their corresponding sorted locations. The time complexity of this algorithm is O(n) when the records are distributed uniformly. Additionally, a similar approach can be used for searching.
A Sorting Algorithm Based on Calculation
4,016
Starting with a set of weighted items, we want to create a generic sample of a certain size that we can later use to estimate the total weight of arbitrary subsets. For this purpose, we propose priority sampling, which, tested on Internet data, performed better than previous methods by orders of magnitude. Priority sampling is simple to define and implement: we consider a stream of items i=0,...,n-1 with weights w_i. For each item i, we generate a random number r_i in (0,1) and create a priority q_i=w_i/r_i. The sample S consists of the k highest priority items. Let t be the (k+1)th highest priority. Each sampled item i in S gets a weight estimate W_i=max{w_i,t}, while non-sampled items get weight estimate W_i=0. Magically, it turns out that the weight estimates are unbiased, that is, E[W_i]=w_i, and by linearity of expectation, we get unbiased estimators over any subset sum simply by adding the sampled weight estimates from the subset. Also, we can estimate the variance of the estimates, and surprisingly, there is no covariance between different weight estimates W_i and W_j. We conjecture an extremely strong near-optimality; namely that for any weight sequence, there exists no specialized scheme for sampling k items with unbiased estimators that gets smaller total variance than priority sampling with k+1 items. Very recently Mario Szegedy has settled this conjecture.
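The scheme translates almost line for line into code; a minimal sketch (ours) of the sampling step and the resulting subset-sum estimator:

```python
import random

def priority_sample(weights, k):
    """Priority sampling: q_i = w_i / r_i with r_i uniform in (0,1];
    keep the k highest-priority items; t is the (k+1)-st highest
    priority; each kept item i gets estimate W_i = max(w_i, t).
    Requires len(weights) > k.
    """
    pri = sorted(((w / (1 - random.random()), i, w)
                  for i, w in enumerate(weights)), reverse=True)
    t = pri[k][0]                          # (k+1)-st highest priority
    return {i: max(w, t) for _, i, w in pri[:k]}

w = [random.expovariate(1.0) for _ in range(1000)]
est = priority_sample(w, 100)
# Unbiased estimate of the total weight (a subset sum over all items):
print(sum(w), sum(est.values()))
```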
Sampling to estimate arbitrary subset sums
4,017
We here study Max Hamming XSAT, i.e., the problem of finding two XSAT models at maximum Hamming distance. By using a recent XSAT solver as an auxiliary function, an O(1.911^n) time algorithm can be constructed, where n is the number of variables. This upper time bound can be further improved to O(1.8348^n) by introducing a new kind of branching, more directly suited for finding models at maximum Hamming distance. The techniques presented here are likely to be of practical use as well as of theoretical value, proving that there are non-trivial algorithms for maximum Hamming distance problems.
Algorithms for Max Hamming Exact Satisfiability
4,018
We study 4 problems in string matching, namely, regular expression matching, approximate regular expression matching, string edit distance, and subsequence indexing, on a standard word RAM model of computation that allows logarithmic-sized words to be manipulated in constant time. We show how to improve the space and/or remove a dependency on the alphabet size for each problem using either an improved tabulation technique of an existing algorithm or by combining known algorithms in a new way.
Fast and Compact Regular Expression Matching
4,019
In this paper we study the problem of finding the approximate nearest neighbor of a query point in the high dimensional space, focusing on the Euclidean space. The earlier approaches use locality-preserving hash functions (that tend to map nearby points to the same value) to construct several hash tables to ensure that the query point hashes to the same bucket as its nearest neighbor in at least one table. Our approach is different -- we use one (or a few) hash tables and hash several randomly chosen points in the neighborhood of the query point, showing that at least one of them will hash to the bucket containing its nearest neighbor. We show that the number of randomly chosen points in the neighborhood of the query point $q$ required depends on the entropy of the hash value $h(p)$ of a random point $p$ at the same distance from $q$ as its nearest neighbor, given $q$ and the locality preserving hash function $h$ chosen randomly from the hash family. Precisely, we show that if the entropy $I(h(p)|q,h) = M$ and $g$ is a bound on the probability that two far-off points will hash to the same bucket, then we can find the approximate nearest neighbor in $O(n^\rho)$ time and near linear $\tilde O(n)$ space, where $\rho = M/\log(1/g)$. Alternatively we can build a data structure of size $\tilde O(n^{1/(1-\rho)})$ to answer queries in $\tilde O(d)$ time. By applying this analysis to known locality preserving hash functions and adjusting the parameters, we show that the $c$-nearest neighbor can be computed in time $\tilde O(n^\rho)$ and near linear space, where $\rho \approx 2.06/c$ as $c$ becomes large.
Entropy based Nearest Neighbor Search in High Dimensions
4,020
In this paper, we study the two choice balls and bins process when balls are not allowed to choose any two random bins, but only bins that are connected by an edge in an underlying graph. We show that for $n$ balls and $n$ bins, if the graph is almost regular with degree $n^\epsilon$, where $\epsilon$ is not too small, the previous bounds on the maximum load continue to hold. Precisely, the maximum load is $\log \log n + O(1/\epsilon) + O(1)$. For general $\Delta$-regular graphs, we show that the maximum load is $\log\log n + O(\frac{\log n}{\log (\Delta/\log^4 n)}) + O(1)$ and also provide an almost matching lower bound of $\log \log n + \frac{\log n}{\log (\Delta \log n)}$. V{\"o}cking [Voc99] showed that the maximum bin size with $d$ choice load balancing can be further improved to $O(\log\log n /d)$ by breaking ties to the left. This requires $d$ random bin choices. We show that such bounds can be achieved by making only two random accesses and querying $d/2$ contiguous bins in each access. By grouping a sequence of $n$ bins into $2n/d$ groups, each of $d/2$ consecutive bins, if each ball chooses two groups at random and inserts the new ball into the least-loaded bin in the lesser loaded group, then the maximum load is $O(\log\log n/d)$ with high probability.
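A quick simulation (ours) makes the edge-constrained process concrete: each ball draws a uniformly random edge and joins the less loaded endpoint. On the complete graph this recovers the classical two-choice process.

```python
import random

def two_choice_on_graph(edges, n, balls):
    """Throw `balls` balls into n bins: each ball picks a random edge,
    whose endpoints are its two candidate bins, and joins the less
    loaded one (ties broken at random).  Returns the maximum load.
    """
    load = [0] * n
    for _ in range(balls):
        u, v = random.choice(edges)
        if load[u] < load[v] or (load[u] == load[v] and random.random() < 0.5):
            load[u] += 1
        else:
            load[v] += 1
    return max(load)

n = 1024
complete = [(u, v) for u in range(n) for v in range(u + 1, n)]
print(two_choice_on_graph(complete, n, n))  # typically log log n + O(1)
```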
Balanced Allocation on Graphs
4,021
Pbit, besides its simplicity, is definitely the fastest list sorting algorithm. It considerably surpasses all already known methods. Among many advantages, it is stable, linear, and can be made to run in place. I will compare Pbit with the algorithm described by Donald E. Knuth in the third volume of ''The Art of Computer Programming'' and with other list sorting algorithms (QuickerSort, MergeSort).
Pbit and other list sorting algorithms
4,022
The problem of clustering fingerprint vectors is an interesting problem in Computational Biology that has been proposed in (Figueroa et al. 2004). In this paper we show some improvements in closing the gaps between the known lower bounds and upper bounds on the approximability of some variants of the biological problem. Namely, we are able to prove that the problem is APX-hard even when each fingerprint contains only two unknown positions. Moreover, we have studied some variants of the original problem, and we give two 2-approximation algorithms for the IECMV and OECMV problems when the number of unknown entries for each vector is at most a constant.
Approximating Clustering of Fingerprint Vectors with Missing Values
4,023
This paper deals with the problem of finding, for a given graph and a given natural number k, a subgraph of k nodes with a maximum number of edges. This problem is known as the k-cluster problem and it is NP-hard on general graphs as well as on chordal graphs. In this paper, it is shown that the k-cluster problem is solvable in polynomial time on interval graphs. In particular, we present two polynomial time algorithms for the class of proper interval graphs and the class of general interval graphs, respectively. Both algorithms are based on a matrix representation for interval graphs. In contrast to representations used in most of the previous work, this matrix representation does not make use of the maximal cliques in the investigated graph.
A polynomial algorithm for the k-cluster problem on interval graphs
4,024
Given two rooted, labeled trees $P$ and $T$ the tree path subsequence problem is to determine which paths in $P$ are subsequences of which paths in $T$. Here a path begins at the root and ends at a leaf. In this paper we propose this problem as a useful query primitive for XML data, and provide new algorithms improving the previously best known time and space bounds.
Matching Subsequences in Trees
4,025
We develop dynamic dictionaries on the word RAM that use asymptotically optimal space, up to constant factors, subject to insertions and deletions, and subject to supporting perfect-hashing queries and/or membership queries, each operation in constant time with high probability. When supporting only membership queries, we attain the optimal space bound of Theta(n lg(u/n)) bits, where n and u are the sizes of the dictionary and the universe, respectively. Previous dictionaries either did not achieve this space bound or had time bounds that were only expected and amortized. When supporting perfect-hashing queries, the optimal space bound depends on the range {1,2,...,n+t} of hashcodes allowed as output. We prove that the optimal space bound is Theta(n lglg(u/n) + n lg(n/(t+1))) bits when supporting only perfect-hashing queries, and it is Theta(n lg(u/n) + n lg(n/(t+1))) bits when also supporting membership queries. All upper bounds are new, as is the Omega(n lg(n/(t+1))) lower bound.
De Dictionariis Dynamicis Pauco Spatio Utentibus
4,026
We consider the problem of efficiently designing sets (codes) of equal-length DNA strings (words) that satisfy certain combinatorial constraints. This problem has numerous motivations including DNA computing and DNA self-assembly. Previous work has extended results from coding theory to obtain bounds on code size for new biologically motivated constraints and has applied heuristic local search and genetic algorithm techniques for code design. This paper proposes a natural optimization formulation of the DNA code design problem in which the goal is to design n strings that satisfy a given set of constraints while minimizing the length of the strings. For multiple sets of constraints, we provide high-probability algorithms that run in time polynomial in n and any given constraint parameters, and output strings of length within a constant factor of the optimal. To the best of our knowledge, this work is the first to consider this type of optimization problem in the context of DNA code design.
Randomized Fast Design of Short DNA Words
4,027
The competitive analysis fails to model locality of reference in the online paging problem. To deal with it, Borodin et al. introduced the access graph model, which attempts to capture the locality of reference. However, the access graph model has a number of troubling aspects. The access graph has to be known in advance to the paging algorithm and the memory required to represent the access graph itself may be very large. In this paper we present truly online strongly competitive paging algorithms in the access graph model that do not have any prior information on the access sequence. We present both deterministic and randomized algorithms. The algorithms need only O(k log n) bits of memory, where k is the number of page slots available and n is the size of the virtual address space, i.e., asymptotically no more memory than needed to store the virtual address translation table. We also observe that our algorithms adapt themselves to temporal changes in the locality of reference. We model temporal changes in the locality of reference by extending the access graph model to the so-called extended access graph model, in which many vertices of the graph can correspond to the same virtual page. We define a measure for the rate of change in the locality of reference in G, denoted by Delta(G). We then show that our algorithms remain strongly competitive as long as Delta(G) >= (1+ epsilon)k, and that no truly online algorithm can be strongly competitive on a class of extended access graphs that includes all graphs G with Delta(G) >= k - o(k).
Truly Online Paging with Locality of Reference
4,028
A particle-swarm is a set of indivisible processing elements that traverse a network in order to perform a distributed function. This paper will describe a particular implementation of a particle-swarm that can simulate the behavior of the popular PageRank algorithm in both its {\it global-rank} and {\it relative-rank} incarnations. PageRank is compared against the particle-swarm method on artificially generated scale-free networks of 1,000 nodes constructed using a common gamma value, $\gamma = 2.5$. The running time of the particle-swarm algorithm is $O(|P|+|P|t)$ where $|P|$ is the size of the particle population and $t$ is the number of particle propagation iterations. The particle-swarm method is shown to be useful due to its ease of extension and running time.
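The rank-by-walking idea can be sketched generically: normalized visit frequencies of a teleporting random walker converge to PageRank. The following is our illustrative sketch of that folklore connection, not the paper's particle-swarm implementation:

```python
import random
from collections import Counter

def random_walk_pagerank(adj, steps=100_000, damping=0.85):
    """Estimate PageRank by simulating a random surfer: with
    probability `damping` follow a random out-edge, otherwise (or at a
    dead end) teleport to a uniformly random node.  Normalized visit
    counts approximate the PageRank vector.
    """
    nodes = list(adj)
    visits = Counter()
    u = random.choice(nodes)
    for _ in range(steps):
        visits[u] += 1
        if adj[u] and random.random() < damping:
            u = random.choice(adj[u])
        else:
            u = random.choice(nodes)
    return {v: c / steps for v, c in visits.items()}

adj = {0: [1], 1: [2], 2: [0, 1], 3: [1]}
print(random_walk_pagerank(adj))
```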
Simulating Network Influence Algorithms Using Particle-Swarms: PageRank and PageRank-Priors
4,029
Tree decompositions were developed by Robertson and Seymour. Since then algorithms have been developed to solve intractable problems efficiently for graphs of bounded treewidth. In this paper we extend tree decompositions to allow cycles to exist in the decomposition graph; we call these new decompositions plane decompositions because we require that the decomposition graph be planar. First, we give some background material about tree decompositions and an overview of algorithms both for decompositions and for approximations of planar graphs. Then, we give our plane decomposition definition and an algorithm that uses this decomposition to approximate the size of the maximum independent set of the underlying graph in polynomial time.
Plane Decompositions as Tools for Approximation
4,030
We present a simple algorithm which maintains the topological order of a directed acyclic graph with n nodes under an online edge insertion sequence in O(n^{2.75}) time, independent of the number of edges m inserted. For dense DAGs, this is an improvement over the previous best result of O(min(m^{3/2} log(n), m^{3/2} + n^2 log(n))) by Katriel and Bodlaender. We also provide an empirical comparison of our algorithm with other algorithms for online topological sorting. Our implementation outperforms them on certain hard instances while it is still competitive on random edge insertion sequences leading to complete DAGs.
An O(n^{2.75}) algorithm for online topological ordering
4,031
In this paper, a new general decomposition theory inspired from modular graph decomposition is presented. Our main result shows that, within this general theory, most of the nice algorithmic tools developed for modular decomposition are still efficient. This theory not only unifies the usual modular decomposition generalisations such as modular decomposition of directed graphs or decomposition of 2-structures, but also star cutsets and bimodular decomposition. Our general framework provides a decomposition algorithm which improves the best known algorithms for bimodular decomposition.
Homogeneity vs. Adjacency: generalising some graph decomposition algorithms
4,032
In a previous paper we generalized the Knuth-Morris-Pratt (KMP) pattern matching algorithm and defined a non-conventional kind of RAM, the MP-RAMs (RAMs equipped with extra operations), and designed an O(n) on-line algorithm for solving the serial episode matching problem on MP-RAMs when there is only one single episode. We here give two extensions of this algorithm to the case when we search for several patterns simultaneously and compare them. More precisely, given $q+1$ strings (a text $t$ of length $n$ and $q$ patterns $m_1,...,m_q$) and a natural number $w$, the {\em multiple serial episode matching problem} consists in finding the number of size $w$ windows of text $t$ which contain patterns $m_1,...,m_q$ as subsequences, i.e. for each $m_i$, if $m_i=p_1,...,p_k$, the letters $p_1,...,p_k$ occur in the window, in the same order as in $m_i$, but not necessarily consecutively (they may be interleaved with other letters). The main contribution is an algorithm solving this problem on-line in time $O(nq)$.
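For orientation, a naive baseline (ours) for one pattern costs O(w) per window, versus the O(n) total of the MP-RAM algorithm:

```python
def count_windows(text, pattern, w):
    """Count the size-w windows of `text` that contain `pattern` as a
    subsequence (letters in order, possibly interleaved with others).
    Naive O(n * w) check, for comparison with the on-line algorithm.
    """
    def is_subsequence(window):
        it = iter(window)
        return all(ch in it for ch in pattern)   # greedy left-to-right match

    return sum(is_subsequence(text[i:i + w])
               for i in range(len(text) - w + 1))

print(count_windows("abcabc", "abc", 4))  # windows abca, bcab, cabc -> 2
```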
Multiple serial episode matching
4,033
In [11] we defined Inf-Datalog and characterized the fragments of Monadic inf-Datalog that have the same expressive power as Modal Logic (resp. $CTL$, alternation-free Modal $\mu$-calculus and Modal $\mu$-calculus). We study here the time and space complexity of evaluation of Monadic inf-Datalog programs on finite models. We deduce a new unified proof that model checking has 1. linear data and program complexities (both in time and space) for $CTL$ and alternation-free Modal $\mu$-calculus, and 2. linear-space (data and program) complexities, linear-time program complexity and polynomial-time data complexity for $L\mu_k$ (Modal $\mu$-calculus with fixed alternation-depth at most $k$).
Complexity of Monadic inf-datalog. Application to temporal logic
4,034
The {\em edit distance} between two ordered trees with vertex labels is the minimum cost of transforming one tree into the other by a sequence of elementary operations consisting of deleting and relabeling existing nodes, as well as inserting new nodes. In this paper, we present a worst-case $O(n^3)$-time algorithm for this problem, improving the previous best $O(n^3\log n)$-time algorithm~\cite{Klein}. Our result requires a novel adaptive strategy for deciding how a dynamic program divides into subproblems (which is interesting in its own right), together with a deeper understanding of the previous algorithms for the problem. We also prove the optimality of our algorithm among the family of \emph{decomposition strategy} algorithms--which also includes the previous fastest algorithms--by tightening the known lower bound of $\Omega(n^2\log^2 n)$~\cite{Touzet} to $\Omega(n^3)$, matching our algorithm's running time. Furthermore, we obtain matching upper and lower bounds of $\Theta(n m^2 (1 + \log \frac{n}{m}))$ when the two trees have different sizes $m$ and~$n$, where $m < n$.
An O(n^3)-Time Algorithm for Tree Edit Distance
4,035
Higher-dimensional orthogonal packing problems have a wide range of practical applications, including packing, cutting, and scheduling. Combining the use of our data structure for characterizing feasible packings with our new classes of lower bounds, and other heuristics, we develop a two-level tree search algorithm for solving higher-dimensional packing problems to optimality. Computational results are reported, including optimal solutions for all two--dimensional test problems from recent literature. This is the third in a series of articles describing new approaches to higher-dimensional packing; see cs.DS/0310032 and cs.DS/0402044.
An exact algorithm for higher-dimensional orthogonal packing
4,036
We introduce umodules, a generalisation of the notion of graph module. The theory we develop captures, among others, undirected graphs, tournaments, digraphs, and $2$-structures. We show that, under some axioms, a unique decomposition tree exists for umodules. Polynomial-time algorithms are provided for: non-trivial umodule test, maximal umodule computation, and decomposition tree computation when the tree exists. Our results unify many known decompositions, such as modular and bi-join decomposition of graphs, and yield a new decomposition of tournaments.
Unifying two Graph Decompositions with Modular Decomposition
4,037
This paper addresses the problem of finding a B-term wavelet representation of a given discrete function $f \in \mathbb{R}^n$ whose distance from f is minimized. The problem is well understood when we seek to minimize the Euclidean distance between f and its representation. The first known algorithms for finding provably approximate representations minimizing general $\ell_p$ distances (including $\ell_\infty$) under a wide variety of compactly supported wavelet bases are presented in this paper. For the Haar basis, a polynomial time approximation scheme is demonstrated. These algorithms are applicable in the one-pass sublinear-space data stream model of computation. They generalize naturally to multiple dimensions and weighted norms. A universal representation that provides a provable approximation guarantee under all p-norms simultaneously; and the first approximation algorithms for bit-budget versions of the problem, known as adaptive quantization, are also presented. Further, it is shown that the algorithms presented here can be used to select a basis from a tree-structured dictionary of bases and find a B-term representation of the given function that provably approximates its best dictionary-basis representation.
Approximation algorithms for wavelet transform coding of data streams
4,038
We study the problem of preemptive scheduling n jobs with given release times on m identical parallel machines. The objective is to minimize the average flow time. We show that when all jobs have equal processing times then the problem can be solved in polynomial time using linear programming. Our algorithm can also be applied to the open-shop problem with release times and unit processing times. For the general case (when processing times are arbitrary), we show that the problem is unary NP-hard.
The Complexity of Mean Flow Time Scheduling Problems with Release Times
4,039
We consider offline scheduling algorithms that incorporate speed scaling to address the bicriteria problem of minimizing energy consumption and a scheduling metric. For makespan, we give linear-time algorithms to compute all non-dominated solutions for the general uniprocessor problem and for the multiprocessor problem when every job requires the same amount of work. We also show that the multiprocessor problem becomes NP-hard when jobs can require different amounts of work. For total flow, we show that the optimal flow corresponding to a particular energy budget cannot be exactly computed on a machine supporting arithmetic and the extraction of roots. This hardness result holds even when scheduling equal-work jobs on a uniprocessor. We do, however, extend previous work by Pruhs et al. to give an arbitrarily-good approximation for scheduling equal-work jobs on a multiprocessor.
Power-aware scheduling for makespan and flow
4,040
This paper presents a new functionality of the Automatic Differentiation (AD) tool Tapenade. Tapenade generates adjoint codes which are widely used for optimization or inverse problems. Unfortunately, for large applications the adjoint code demands a great deal of memory, because it needs to store a large set of intermediates values. To cope with that problem, Tapenade implements a sub-optimal version of a technique called checkpointing, which is a trade-off between storage and recomputation. Our long-term goal is to provide an optimal checkpointing strategy for every code, not yet achieved by any AD tool. Towards that goal, we first introduce modifications in Tapenade in order to give the user the choice to select the checkpointing strategy most suitable for their code. Second, we conduct experiments in real-size scientific codes in order to gather hints that help us to deduce an optimal checkpointing strategy. Some of the experimental results show memory savings up to 35% and execution time up to 90%.
Enabling user-driven Checkpointing strategies in Reverse-mode Automatic Differentiation
4,041
This paper presents scheduling algorithms for procrastinators, where the speed that a procrastinator executes a job increases as the due date approaches. We give optimal off-line scheduling policies for linearly increasing speed functions. We then explain the computational/numerical issues involved in implementing this policy. We next explore the online setting, showing that there exist adversaries that force any online scheduling policy to miss due dates. This impossibility result motivates the problem of minimizing the maximum interval stretch of any job; the interval stretch of a job is the job's flow time divided by the job's due date minus release time. We show that several common scheduling strategies, including the "hit-the-highest-nail" strategy beloved by procrastinators, have arbitrarily large maximum interval stretch. Then we give the "thrashing" scheduling policy and show that it is a \Theta(1) approximation algorithm for the maximum interval stretch.
Scheduling Algorithms for Procrastinators
4,042
Let (X,d_X) be an n-point metric space. We show that there exists a distribution D over non-contractive embeddings into trees f:X-->T such that for every x in X, the expectation with respect to D of the maximum over y in X of the ratio d_T(f(x),f(y)) / d_X(x,y) is at most C (log n)^2, where C is a universal constant. Conversely we show that the above quadratic dependence on log n cannot be improved in general. Such embeddings, which we call maximum gradient embeddings, yield a framework for the design of approximation algorithms for a wide range of clustering problems with monotone costs, including fault-tolerant versions of k-median and facility location.
Maximum gradient embeddings and monotone clustering
4,043
In this paper we revisit the classical regular expression matching problem, namely, given a regular expression $R$ and a string $Q$, decide if $Q$ matches one of the strings specified by $R$. Let $m$ and $n$ be the length of $R$ and $Q$, respectively. On a standard unit-cost RAM with word length $w \geq \log n$, we show that the problem can be solved in $O(m)$ space with the following running times: \begin{equation*} \begin{cases} O(n\frac{m \log w}{w} + m \log w) & \text{if $m > w$} \\ O(n\log m + m\log m) & \text{if $\sqrt{w} < m \leq w$} \\ O(\min(n+ m^2, n\log m + m\log m)) & \text{if $m \leq \sqrt{w}$.} \end{cases} \end{equation*} This improves the best known time bound among algorithms using $O(m)$ space. Whenever $w \geq \log^2 n$ it improves all known time bounds regardless of how much space is used.
New Algorithms for Regular Expression Matching
4,044
In some applications of matching, the structural or hierarchical properties of the two graphs being aligned must be maintained. The hierarchical properties are induced by the direction of the edges in the two directed graphs. These structural relationships defined by the hierarchy in the graphs act as a constraint on the alignment. In this paper, we formalize the above problem as the weighted alignment between two directed acyclic graphs. We prove that this problem is NP-complete, show several upper bounds for approximating the solution, and finally introduce polynomial time algorithms for sub-classes of directed acyclic graphs.
Weighted hierarchical alignment of directed acyclic graph
4,045
In this paper, we study the online multidimensional bin packing problem when all items are hypercubes. Based on the techniques in the one-dimensional bin packing algorithm Super Harmonic by Seiden, we give a framework for the online hypercube packing problem and obtain new upper bounds on asymptotic competitive ratios. For square packing, we get an upper bound of 2.1439, which is better than 2.24437. For cube packing, we also give a new upper bound of 2.6852, which is better than the bound 2.9421 by Epstein and van Stee.
Improved online hypercube packing
4,046
In this paper we establish a general algorithmic framework between bin packing and strip packing, with which we achieve the same asymptotic bounds by applying bin packing algorithms to strip packing. More precisely we obtain the following results: (1) Any offline bin packing algorithm can be applied to strip packing maintaining the same asymptotic worst-case ratio. Thus using FFD (MFFD) as a subroutine, we get a practical (simple and fast) algorithm for strip packing with an upper bound 11/9 (71/60). A simple AFPTAS for strip packing immediately follows. (2) A class of Harmonic-based algorithms for bin packing can be applied to online strip packing maintaining the same asymptotic competitive ratio. It implies that online strip packing admits an upper bound of 1.58889 on the asymptotic competitive ratio, which is very close to the lower bound 1.5401 and significantly improves the previously best bound of 1.6910, and affirmatively answers an open question posed by Csirik et al.
Strip Packing vs. Bin Packing
4,047
In this paper, we study the 3D strip packing problem in which we are given a list of 3-dimensional boxes and required to pack all of them into a 3-dimensional strip with length 1 and width 1 and unlimited height to minimize the height used. Our results are as follows: i) we give an approximation algorithm with asymptotic worst-case ratio 1.69103, which improves the previous best bound of $2+\epsilon$ by Jansen and Solis-Oba of SODA 2006; ii) we also present an asymptotic PTAS for the case in which all items have {\em square} bases.
New Upper Bounds on The Approximability of 3D Strip Packing
4,048
This paper develops a new method for recovering m-sparse signals that is simultaneously uniform and quick. We present a reconstruction algorithm whose run time, O(m log^2(m) log^2(d)), is sublinear in the length d of the signal. The reconstruction error is within a logarithmic factor (in m) of the optimal m-term approximation error in l_1. In particular, the algorithm recovers m-sparse signals perfectly and noisy signals are recovered with polylogarithmic distortion. Our algorithm makes O(m log^2 (d)) measurements, which is within a logarithmic factor of optimal. We also present a small-space implementation of the algorithm. These sketching techniques and the corresponding reconstruction algorithms provide an algorithmic dimension reduction in the l_1 norm. In particular, vectors of support m in dimension d can be linearly embedded into O(m log^2 d) dimensions with polylogarithmic distortion. We can reconstruct a vector from its low-dimensional sketch in time O(m log^2(m) log^2(d)). Furthermore, this reconstruction is stable and robust under small perturbations.
Algorithmic linear dimension reduction in the l_1 norm for sparse vectors
4,049
Given two rooted, ordered, and labeled trees $P$ and $T$ the tree inclusion problem is to determine if $P$ can be obtained from $T$ by deleting nodes in $T$. This problem has recently been recognized as an important query primitive in XML databases. Kilpel\"ainen and Mannila [\emph{SIAM J. Comput. 1995}] presented the first polynomial time algorithm using quadratic time and space. Since then several improved results have been obtained for special cases when $P$ and $T$ have a small number of leaves or small depth. However, in the worst case these algorithms still use quadratic time and space. Let $n_S$, $l_S$, and $d_S$ denote the number of nodes, the number of leaves, and the maximum depth, respectively, of a tree $S \in \{P, T\}$. In this paper we show that the tree inclusion problem can be solved in space $O(n_T)$ and time $O(\min(l_P n_T, l_P l_T \log\log n_T + n_T, \frac{n_P n_T}{\log n_T} + n_{T}\log n_{T}))$. This improves or matches the best known time complexities while using only linear space instead of quadratic. This is particularly important in practical applications, such as XML databases, where space is likely to be a bottleneck.
The Tree Inclusion Problem: In Linear Space and Faster
4,050
We present the CR-precis structure, which is a general-purpose, deterministic and sub-linear data structure for summarizing \emph{update} data streams. The CR-precis structure yields the \emph{first deterministic sub-linear space/time algorithms for update streams} for answering a variety of fundamental stream queries, such as: (a) point queries, (b) range queries, (c) finding approximate frequent items, (d) finding approximate quantiles, (e) finding approximate hierarchical heavy hitters, (f) estimating inner-products, and (g) near-optimal $B$-bucket histograms.
CR-precis: A deterministic summary structure for update data streams
4,051
We study the approximate string matching and regular expression matching problem for the case when the text to be searched is compressed with the Ziv-Lempel adaptive dictionary compression schemes. We present a time-space trade-off that leads to algorithms improving the previously known complexities for both problems. In particular, we significantly improve the space bounds, which in practical applications are likely to be a bottleneck.
Improved Approximate String Matching and Regular Expression Matching on Ziv-Lempel Compressed Texts
4,052
Rank/Select dictionaries are data structures over an ordered set $S \subset \{0,1,...,n-1\}$ that compute $\mathrm{rank}(x,S)$ (the number of elements in $S$ which are no greater than $x$) and $\mathrm{select}(i,S)$ (the $i$-th smallest element in $S$), and are the fundamental components of \emph{succinct data structures} for strings, trees, graphs, etc. In those data structures, however, only asymptotic behavior has been considered and their performance on real data is not satisfactory. In this paper, we propose four novel Rank/Select dictionaries, esp, recrank, vcode and sdarray, each of which is small if the number of elements in $S$ is small, indeed close to $nH_0(S)$ ($H_0(S) \leq 1$ is the zero-th order \textit{empirical entropy} of $S$) in practice, and whose query time is superior to previous ones. Experimental results reveal the characteristics of our data structures and also show that they are superior to existing implementations in both size and query time.
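For readers new to the primitive, the interface (though not the succinct encoding) is easy to state; a plain, non-compressed sketch (ours):

```python
import bisect

class PlainRankSelect:
    """rank(x) = number of elements of S no greater than x;
    select(i) = i-th smallest element of S (0-based).  Stores S as a
    sorted array, so it is not entropy-compressed like the structures
    proposed above; it only illustrates the supported queries.
    """
    def __init__(self, S):
        self.a = sorted(S)

    def rank(self, x):
        return bisect.bisect_right(self.a, x)

    def select(self, i):
        return self.a[i]

rs = PlainRankSelect({2, 3, 5, 7, 11})
print(rs.rank(6), rs.select(2))  # 3 5
```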
Practical Entropy-Compressed Rank/Select Dictionary
4,053
The running maximum-minimum (max-min) filter computes the maxima and minima over running windows of size w. This filter has numerous applications in signal processing and time series analysis. We present an easy-to-implement online algorithm requiring no more than 3 comparisons per element, in the worst case. Comparatively, no algorithm is known to compute the running maximum (or minimum) filter in 1.5 comparisons per element, in the worst case. Our algorithm has reduced latency and memory usage.
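The standard way to get amortized O(1) comparisons per element is a pair of monotonic deques; a sketch of that well-known technique (ours, not necessarily the paper's 3-comparison algorithm):

```python
from collections import deque

def running_max_min(x, w):
    """Yield (max, min) of each size-w window of x, using two monotonic
    deques of indices: U keeps decreasing values (window maxima), L
    keeps increasing values (window minima).  Amortized O(1) per element.
    """
    U, L = deque(), deque()
    for i, v in enumerate(x):
        while U and x[U[-1]] <= v:
            U.pop()
        while L and x[L[-1]] >= v:
            L.pop()
        U.append(i)
        L.append(i)
        if U[0] <= i - w:
            U.popleft()               # expire indices that left the window
        if L[0] <= i - w:
            L.popleft()
        if i >= w - 1:
            yield x[U[0]], x[L[0]]

print(list(running_max_min([3, 1, 4, 1, 5], 3)))
# [(4, 1), (4, 1), (5, 1)]
```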
Streaming Maximum-Minimum Filter Using No More than Three Comparisons per Element
4,054
Lagrangian relaxation and approximate optimization algorithms have received much attention in the last two decades. Typically, the running time of these methods to obtain an $\epsilon$-approximate solution is proportional to $\frac{1}{\epsilon^2}$. Recently, Bienstock and Iyengar, following Nesterov, gave an algorithm for fractional packing linear programs which runs in $\frac{1}{\epsilon}$ iterations. The latter algorithm requires solving a convex quadratic program at every iteration - an optimization subroutine which dominates the theoretical running time. We give an algorithm for convex programs with strictly convex constraints which runs in time proportional to $\frac{1}{\epsilon}$. The algorithm does NOT require solving any quadratic program, but uses gradient steps and elementary operations only. Problems which have strictly convex constraints include maximum entropy frequency estimation, portfolio optimization with loss risk constraints, and various computational problems in signal processing. As a side product, we also obtain a simpler version of Bienstock and Iyengar's result for general linear programming, with similar running time. We derive these algorithms using a new framework for deriving convex optimization algorithms from online game playing algorithms, which may be of independent interest.
Approximate Convex Optimization by Online Game Playing
4,055
In this paper we devise an extremely efficient fully dynamic distributed algorithm for maintaining sparse spanners. Our results also include the first fully dynamic centralized algorithm for the problem with non-trivial bounds for both incremental and decremental updates. Finally, we devise a very efficient streaming algorithm for the problem.
A near-optimal fully dynamic distributed algorithm for maintaining sparse spanners
4,056
A new general decomposition theory inspired from modular graph decomposition is presented. This helps unify modular decomposition on different structures, including (but not restricted to) graphs. Moreover, even in the case of graphs, the terminology ``module'' not only captures the classical graph modules but also makes it possible to handle 2-connected components, star-cutsets, and other vertex subsets. The main result is that most of the nice algorithmic tools developed for modular decomposition of graphs still apply efficiently to our generalisation of modules. Besides, when an essential axiom is satisfied, almost all the important properties can be retrieved. For this case, an algorithm given by Ehrenfeucht, Gabow, McConnell and Sullivan in 1994 is generalised and yields a very efficient solution to the associated decomposition problem.
Algorithmic Aspects of a General Modular Decomposition Theory
4,057
Given an undirected graph $G=(V,E)$ on $n$ vertices, $m$ edges, and an integer $t\ge 1$, a subgraph $(V,E_S)$, $E_S\subseteq E$, is called a $t$-spanner if for any pair of vertices $u,v \in V$, the distance between them in the subgraph is at most $t$ times the actual distance. We present streaming algorithms for computing a $t$-spanner of essentially optimal size-stretch trade-off for any undirected graph. Our first algorithm is for the classical streaming model and works for unweighted graphs only. The algorithm performs a single pass on the stream of edges and requires $O(m)$ time to process the entire stream of edges. This drastically improves the previous best single pass streaming algorithm for computing a $t$-spanner, which requires $\Theta(mn^{\frac{2}{t}})$ time to process the stream and computes a spanner of size slightly larger than optimal. Our second algorithm is for the {\em StreamSort} model introduced by Aggarwal et al. [FOCS 2004], which is the streaming model augmented with a sorting primitive. The {\em StreamSort} model has been shown to be a more powerful, and still very realistic, model than the streaming model for massive data set applications. Our algorithm, which works for weighted graphs as well, performs $O(t)$ passes using $O(\log n)$ bits of working memory only. Both our algorithms require only elementary data structures.
Faster Streaming algorithms for graph spanners
4,058
Since 1969 \cite{C-MST69,S-SMJ77}, we know that any Presburger-definable set \cite{P-PCM29} (a set of integer vectors satisfying a formula in the first-order additive theory of the integers) can be represented by a state-based symbolic representation, called in this paper Finite Digit Vector Automata (FDVA). Efficient algorithms for manipulating these sets have been recently developed. However, the problem of deciding if a FDVA represents such a set is a well-known hard problem, first solved by Muchnik in 1991 with a quadruply-exponential time algorithm. In this paper, we show how to determine in polynomial time whether a FDVA represents a Presburger-definable set, and in this positive case we provide a polynomial time algorithm that constructs a Presburger formula that defines the same set.
Least Significant Digit First Presburger Automata
4,059
We consider a memory allocation problem that can be modeled as a version of bin packing where items may be split, but each bin may contain at most two (parts of) items. A 3/2-approximation algorithm and an NP-hardness proof for this problem were given by Chung et al. We give a simpler 3/2-approximation algorithm for it which is in fact an online algorithm. This algorithm also has good performance for the more general case where each bin may contain at most k parts of items. We show that this general case is also strongly NP-hard. Additionally, we give an efficient 7/5-approximation algorithm.
Improved results for a memory allocation problem
4,060
We introduce the concept of knowledge states; many well-known algorithms can be viewed as knowledge state algorithms. The knowledge state approach can be used to construct competitive randomized online algorithms and study the tradeoff between competitiveness and memory. A knowledge state simply states conditional obligations of an adversary, by fixing a work function, and gives a distribution for the algorithm. When a knowledge state algorithm receives a request, it then calculates one or more "subsequent" knowledge states, together with a probability of transition to each. The algorithm then uses randomization to select one of those subsequents to be the new knowledge state. We apply the method to the paging problem. We present optimally competitive algorithms for paging for the cases where the cache sizes are k=2 and k=3. These algorithms use only a very limited number of bookmarks.
Knowledge State Algorithms: Randomization with Limited Information
4,061
For high volume data streams and large data warehouses, sampling is used for efficient approximate answers to aggregate queries over selected subsets. Mathematically, we are dealing with a set of weighted items and want to support queries to arbitrary subset sums. With unit weights, we can compute subset sizes which together with the previous sums provide the subset averages. The question addressed here is which sampling scheme we should use to get the most accurate subset sum estimates. We present a simple theorem on the variance of subset sum estimation and use it to prove variance optimality and near-optimality of subset sum estimation with different known sampling schemes. This variance is measured as the average over all subsets of any given size. By optimal we mean there is no set of input weights for which any sampling scheme can have a better average variance. Such powerful results can never be established experimentally. The results of this paper are derived mathematically. For example, we show that appropriately weighted systematic sampling is simultaneously optimal for all subset sizes. More standard schemes such as uniform sampling and probability-proportional-to-size sampling with replacement can be arbitrarily bad. Knowing the variance optimality of different sampling schemes can help deciding which sampling scheme to apply in a given context.
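A minimal sketch (ours) of the weighted systematic sampling mentioned above, with Horvitz-Thompson estimates; for simplicity it assumes every weight is at most W/k, so all inclusion probabilities stay below 1:

```python
import random

def systematic_sample(weights, k):
    """Weight-proportional systematic sampling of exactly k items
    (assuming each w_i <= W/k).  Item i is included with probability
    p_i = k * w_i / W and, if sampled, gets the Horvitz-Thompson
    estimate w_i / p_i = W / k, so subset sums are estimated
    unbiasedly by summing estimates over sampled subset members.
    """
    W = sum(weights)
    u = random.random() * W / k      # random offset, then a fixed grid
    sample, acc = {}, 0.0
    for i, w in enumerate(weights):
        acc += w
        if acc > u:                  # a grid point fell inside item i
            sample[i] = W / k
            u += W / k               # advance to the next grid point
    return sample

w = [random.random() for _ in range(1000)]
s = systematic_sample(w, 50)
print(sum(w), sum(s.values()))       # unbiased estimate of the total
```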
On the variance of subset sum estimation
4,062
We consider two optimization problems related to finding dense subgraphs. The densest at-least-k-subgraph problem (DalkS) is to find an induced subgraph of highest average degree among all subgraphs with at least k vertices, and the densest at-most-k-subgraph problem (DamkS) is defined similarly. These problems are related to the well-known densest k-subgraph problem (DkS), which is to find the densest subgraph on exactly k vertices. We show that DalkS can be approximated efficiently, while DamkS is nearly as hard to approximate as the densest k-subgraph problem.
Finding large and small dense subgraphs
4,063
The problem of computing the chromatic number of a $P_5$-free graph is known to be NP-hard. In contrast to this negative result, we show that determining whether or not a $P_5$-free graph admits a $k$-colouring, for each fixed number of colours $k$, can be done in polynomial time. If such a colouring exists, our algorithm produces it.
Deciding k-colourability of $P_5$-free graphs in polynomial time
4,064
A {\em leader election} algorithm is an elimination process that recursively divides an initial group of n items into two subgroups, eliminates one subgroup, and continues the procedure until a subgroup is of size 1. In this paper the biased case is analyzed. We are interested in the {\em cost} of the algorithm, i.e. the number of operations needed until the algorithm stops. Using a probabilistic approach, the asymptotic behavior of the algorithm is shown to be related to the behavior of a hitting time of two random sequences on [0,1].
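A tiny simulation (ours) of the biased elimination process, counting one operation per coin flip:

```python
import random

def leader_election_cost(n, p=0.5):
    """Each remaining item flips a p-biased coin; the 'heads' subgroup
    survives (the round is repeated if it would be empty).  Returns the
    total number of flips until one item remains -- the cost analyzed
    above.  Expected cost is about 2n in the unbiased case p = 1/2.
    """
    cost = 0
    while n > 1:
        cost += n
        heads = sum(random.random() < p for _ in range(n))
        if heads > 0:
            n = heads
    return cost

print(leader_election_cost(1000))
```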
A probabilistic analysis of a leader election algorithm
4,065
Let $v$ be a vertex of a graph $G$. By the local complementation of $G$ at $v$ we mean complementing the subgraph induced by the neighbors of $v$. This operator can be generalized as follows. Assume that each edge of $G$ has a label in the finite field $\mathbf{F}_q$, and let $(g_{ij})$ be the set of labels ($g_{ij}$ is the label of edge $ij$). We define two types of operators. For the first one, let $v$ be a vertex of $G$ and $a\in \mathbf{F}_q$, and obtain the graph with labels $g'_{ij}=g_{ij}+ag_{vi}g_{vj}$. For the second, if $0\neq b\in \mathbf{F}_q$, the resulting graph has labels $g''_{vi}=bg_{vi}$ and $g''_{ij}=g_{ij}$ for $i,j$ unequal to $v$. It is clear that if the field is binary, these operators are just the local complementations described above. The problem of whether two graphs are equivalent under local complementations has been studied \cite{bouchalg}. Here we consider the general case and, assuming that $q$ is odd, present the first known efficient algorithm to verify whether two graphs are locally equivalent or not.
An Efficient Algorithm to Recognize Locally Equivalent Graphs in Non-Binary Case
4,066
A streaming model is one where data items arrive over a long period of time, either one item at a time or in bursts. Typical tasks include computing various statistics over a sliding window of some fixed time-horizon. What makes the streaming model interesting is that as time progresses, old items expire and new ones arrive. One of the simplest and most central tasks in this model is sampling: the task of maintaining up to $k$ uniformly distributed items from the current time-window as old items expire and new ones arrive. We call sampling algorithms {\bf succinct} if they use provably optimal (up to constant factors) {\bf worst-case} memory to maintain $k$ items (either with or without replacement). We stress that in many applications, structures that have {\em expected} succinct representation as time progresses are not sufficient, as small-probability events eventually happen with probability 1. Thus, in this paper we ask the following question: are Succinct Sampling on Streams (or $S^3$) algorithms possible, and if so, for what models? Perhaps somewhat surprisingly, we show that $S^3$-algorithms are possible for {\em all} variants of the problem mentioned above, i.e. both with and without replacement and for both one-at-a-time and bursty arrival models. Finally, we use $S^3$-algorithms to solve various problems in the sliding windows model, including frequency moments, counting triangles, entropy and density estimations. For these problems we present the \emph{first} solutions with provable worst-case memory guarantees.
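For contrast with the expiring-window setting studied here, the classical infinite-window problem is solved by reservoir sampling (standard Algorithm R, not the paper's $S^3$ algorithms), sketched below:

```python
import random

def reservoir_sample(stream, k):
    """Algorithm R: maintain k uniformly random items, without
    replacement, from everything seen so far.  Unlike the
    sliding-window model above, items never expire.
    """
    sample = []
    for t, item in enumerate(stream):
        if t < k:
            sample.append(item)
        else:
            j = random.randrange(t + 1)  # item survives with prob k/(t+1)
            if j < k:
                sample[j] = item
    return sample

print(reservoir_sample(range(10**6), 5))
```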
Succinct Sampling on Streams
4,067
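As a point of reference for the abstract above, the classic reservoir-sampling sketch below maintains $k$ uniform items without replacement on an insert-only stream in $O(k)$ words; handling expirations in a sliding window with worst-case memory guarantees is exactly what the paper's $S^3$-algorithms add on top of this baseline.

```python
import random

def reservoir_sample(stream, k, rng=random):
    """Maintain k uniformly random items (without replacement) from an
    insert-only stream; item i replaces a sample slot with prob. k/(i+1)."""
    sample = []
    for i, x in enumerate(stream):
        if i < k:
            sample.append(x)
        else:
            j = rng.randrange(i + 1)
            if j < k:
                sample[j] = x
    return sample
```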
The DIMACS 32-bit parity problem is a satisfiability (SAT) problem that is hard to solve. So far, EqSatz by Li is the only solver that can solve this problem; however, it is very slow. It is reported to have spent 11855 seconds solving a par32-5 instance on a Macintosh G3 300 MHz. This paper introduces a new solver, XORSAT, which splits the original problem into two parts, a structured part and a random part, and then solves them separately with WalkSAT and an XOR equation solver. Based on our empirical observations, XORSAT is surprisingly fast: approximately 1000 times faster than EqSatz. For a par32-5 instance, XORSAT took 2.9 seconds, while EqSatz took 2844 seconds on an Intel Pentium IV 2.66GHz CPU. We believe that this method, which differs significantly from traditional approaches, is also useful beyond this domain.
XORSAT: An Efficient Algorithm for the DIMACS 32-bit Parity Problem
4,068
We obtain a 1.5-approximation algorithm for the metric uncapacitated facility location problem (UFL), which improves on the previously best known 1.52-approximation algorithm by Mahdian, Ye and Zhang. Note that the approximability lower bound by Guha and Khuller is 1.463. An algorithm is a {\em ($\lambda_f$,$\lambda_c$)-approximation algorithm} if the solution it produces has total cost at most $\lambda_f \cdot F^* + \lambda_c \cdot C^*$, where $F^*$ and $C^*$ are the facility and the connection cost of an optimal solution. Our new algorithm, which is a modification of the $(1+2/e)$-approximation algorithm of Chudak and Shmoys, is a (1.6774,1.3738)-approximation algorithm for the UFL problem and is the first one that touches the approximability limit curve $(\gamma_f, 1+2e^{-\gamma_f})$ established by Jain, Mahdian and Saberi. As a consequence, we obtain the first optimal approximation algorithm for instances dominated by connection costs. When combined with a (1.11,1.7764)-approximation algorithm proposed by Jain et al., and later analyzed by Mahdian et al., we obtain the overall approximation guarantee of 1.5 for the metric UFL problem. We also describe how to use our algorithm to improve the approximation ratio for the 3-level version of UFL.
An optimal bifactor approximation algorithm for the metric uncapacitated facility location problem
4,069
NLC-width is a variant of clique-width with many applications in graph algorithmics. This paper is devoted to graphs of NLC-width two. After giving new structural properties of the class, we propose an $O(n^2 m)$-time recognition algorithm, improving on Johansson's algorithm \cite{Johansson00}. Moreover, our algorithm is simple to understand. These properties and the algorithm allow us to propose a robust $O(n^2 m)$-time isomorphism algorithm for NLC-2 graphs. As far as we know, it is the first polynomial-time isomorphism algorithm for this class.
NLC-2 graph recognition and isomorphism
4,070
Tag clouds provide an aggregate of tag-usage statistics. They are typically sent as in-line HTML to browsers. However, display mechanisms suited for ordinary text are not ideal for tags, because font sizes may vary widely on a line. As well, the typical layout does not account for relationships that may be known between tags. This paper presents models and algorithms to improve the display of tag clouds that consist of in-line HTML, as well as algorithms that use nested tables to achieve a more general 2-dimensional layout in which tag relationships are considered. The first algorithms leverage prior work in typesetting and rectangle packing, whereas the second group of algorithms leverage prior work in Electronic Design Automation. Experiments show our algorithms can be efficiently implemented and perform well.
Tag-Cloud Drawing: Algorithms for Cloud Visualization
4,071
In this paper, we introduce the on-line Viterbi algorithm for decoding hidden Markov models (HMMs) in much less than linear space. Our analysis of two-state HMMs suggests that the expected maximum memory needed to decode a sequence of length $n$ with an $m$-state HMM can be as low as $\Theta(m\log n)$, without a significant slow-down compared to the classical Viterbi algorithm. The classical Viterbi algorithm requires $O(mn)$ space, which is impractical for the analysis of long DNA sequences (such as complete human genome chromosomes) and for continuous data streams. We also experimentally demonstrate the performance of the on-line Viterbi algorithm on a simple HMM for gene finding, on both simulated and real DNA sequences.
On-line Viterbi Algorithm and Its Relationship to Random Walks
4,072
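For contrast with the abstract above, a compact sketch of the classical Viterbi algorithm, which keeps the full $O(mn)$ backpointer table that the on-line variant avoids (the on-line version can emit a prefix of the path as soon as all surviving backpointer paths coalesce):

```python
import numpy as np

def viterbi(obs, init, trans, emit):
    """Classical Viterbi decoding in log space.
    obs: observation indices; init: (m,) priors;
    trans: (m, m) transition matrix; emit: (m, alphabet) emission matrix."""
    n, m = len(obs), len(init)
    dp = np.log(init) + np.log(emit[:, obs[0]])
    back = np.zeros((n, m), dtype=int)            # the O(mn) memory cost
    for t in range(1, n):
        scores = dp[:, None] + np.log(trans)      # scores[i, j]: best path ending i -> j
        back[t] = scores.argmax(axis=0)
        dp = scores.max(axis=0) + np.log(emit[:, obs[t]])
    path = [int(dp.argmax())]
    for t in range(n - 1, 0, -1):
        path.append(int(back[t, path[-1]]))
    return path[::-1]
```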
A new incremental algorithm for data compression is presented. For a sequence of input symbols, the algorithm incrementally constructs a p-adic integer as its output. The decoding process starts with the least significant part of the p-adic integer and incrementally reconstructs the sequence of input symbols. The algorithm is based on certain features of p-adic numbers and the p-adic norm. The p-adic coding algorithm may be considered a generalization of a popular compression technique, arithmetic coding. It is shown that for p = 2 the algorithm works as an integer variant of arithmetic coding; for a special class of models it gives exactly the same codes as Huffman's algorithm, and for another special model and a specific alphabet it gives Golomb-Rice codes.
P-adic arithmetic coding
4,073
We introduce the straggler identification problem, in which an algorithm must determine the identities of the remaining members of a set after it has had a large number of insertion and deletion operations performed on it, and now has relatively few remaining members. The goal is to do this in o(n) space, where n is the total number of identities. The straggler identification problem has applications, for example, in determining the set of unacknowledged packets in a high-bandwidth multicast data stream. We provide a deterministic solution to the straggler identification problem that uses only O(d log n) bits and is based on a novel application of Newton's identities for symmetric polynomials. This solution can identify any subset of d stragglers from a set of n O(log n)-bit identifiers, assuming that there are no false deletions of identities not already in the set. Indeed, we give a lower bound argument that shows that any small-space deterministic solution to the straggler identification problem cannot be guaranteed to handle false deletions. Nevertheless, we show that there is a simple randomized solution using O(d log n log(1/epsilon)) bits that can maintain a multiset and solve the straggler identification problem, tolerating false deletions, where epsilon>0 is a user-defined parameter bounding the probability of an incorrect response. This randomized solution is based on a new type of Bloom filter, which we call the invertible Bloom filter.
Straggler Identification in Round-Trip Data Streams via Newton's Identities and Invertible Bloom Filters
4,074
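A hedged sketch of the deterministic idea described in the abstract above: maintain the first $d$ power sums of the identifiers modulo a prime under insertions and deletions, convert them to elementary symmetric polynomials via Newton's identities, and recover the stragglers as roots. The root-finding step below simply tries every identifier in the universe, which is only an illustration; identifiers are assumed to be nonzero integers below the prime.

```python
P = (1 << 61) - 1  # a Mersenne prime larger than any identifier

class StragglerSketch:
    """O(d) words of state: power sums s_j = sum of x^j over members, mod P."""
    def __init__(self, d):
        self.d, self.s = d, [0] * (d + 1)
    def _update(self, x, sign):
        xj = 1
        for j in range(1, self.d + 1):
            xj = xj * x % P
            self.s[j] = (self.s[j] + sign * xj) % P
    def insert(self, x): self._update(x, +1)
    def delete(self, x): self._update(x, -1)
    def stragglers(self, universe):
        # Newton's identities: j*e_j = sum_{i=1..j} (-1)^{i-1} e_{j-i} s_i
        e = [1]
        for j in range(1, self.d + 1):
            acc = sum((-1) ** (i - 1) * e[j - i] * self.s[i] for i in range(1, j + 1))
            e.append(acc % P * pow(j, P - 2, P) % P)
        # members are exactly the nonzero roots of sum_j (-1)^j e_j z^(d-j)
        def is_root(z):
            return sum((-1) ** j * e[j] * pow(z, self.d - j, P)
                       for j in range(self.d + 1)) % P == 0
        return [x for x in universe if is_root(x)]
```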
This paper describes an approach for obtaining direct access to the attacked squares of sliding pieces without resorting to rotated bitboards. The technique involves creating four hash tables using the built-in hash arrays of an interpreted, high-level language. The rank, file, and diagonal occupancies are first isolated by masking the desired portion of the board. The attacked squares are then retrieved directly from the hash tables. Maintaining incrementally updated rotated bitboards becomes unnecessary, as does all the updating, mapping and shifting required to access the attacked squares. Finally, rotated-bitboard move generation speed is compared with that of the direct hash-table lookup method.
Avoiding Rotated Bitboards with Direct Lookup
4,075
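A minimal sketch of the lookup idea for the rank direction only (files and diagonals work analogously with different masks and mappings); the table shape and names are illustrative, not the paper's exact tables.

```python
# Precompute, for each file f and 8-bit rank occupancy, the squares a rook
# on file f attacks along that rank (sliding until the first blocker).
rank_attacks = [[0] * 256 for _ in range(8)]
for f in range(8):
    for occ in range(256):
        mask = 0
        for step in (1, -1):
            g = f + step
            while 0 <= g < 8:
                mask |= 1 << g
                if occ >> g & 1:      # stop at the first occupied square
                    break
                g += step
        rank_attacks[f][occ] = mask

def rook_rank_attacks(square, occupancy_bb):
    """Isolate the rank occupancy by masking, then look up the answer."""
    rank, f = divmod(square, 8)
    occ = (occupancy_bb >> (8 * rank)) & 0xFF
    return rank_attacks[f][occ] << (8 * rank)
```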
This paper presents a hybrid approach to spatial indexing of two-dimensional data. It sheds new light on the age-old problem by thinking of the traditional algorithms as working with images. Inspiration is drawn from an analogous situation found in machine and human vision. Image processing techniques are used to assist in the spatial indexing of the data. A fixed-grid approach is used, and bins with too many records are sub-divided hierarchically. Search queries are pre-computed for bins that do not contain any data records. This has the effect of dividing the search space into non-rectangular regions based on the spatial properties of the data. The bucketing quad tree can be considered as an image with a resolution of two by two for each layer. The results show that this method performs better than the quad tree if there are more divisions per layer. This confirms our suspicion that the algorithm works better if it gets to look at the data with higher-resolution images. An elegant class structure is developed in which the implementation of a concrete spatial index for a particular data type merely relies on rendering the data onto an image.
Using Images to create a Hierarchical Grid Spatial Index
4,076
In this paper we consider module-composed graphs, i.e. graphs which can be defined by a sequence of one-vertex insertions v_1,...,v_n, such that the neighbourhood of vertex v_i, 2<= i<= n, forms a module (a homogeneous set) of the graph defined by vertices v_1,..., v_{i-1}. We show that module-composed graphs are HHDS-free and thus homogeneously orderable, weakly chordal, and perfect. Every bipartite distance hereditary graph, every (co-2C_4,P_4)-free graph and thus every trivially perfect graph is module-composed. We give an O(|V_G|(|V_G|+|E_G|)) time algorithm to decide whether a given graph G is module-composed and construct a corresponding module-sequence. For the case of bipartite graphs, module-composed graphs are exactly distance hereditary graphs, which implies simple linear time algorithms for their recognition and construction of a corresponding module-sequence.
A note on module-composed graphs
4,077
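To make the definition in the abstract above concrete, here is a naive checker for a given insertion order (quadratic-to-cubic, unlike the paper's algorithm, and assuming an adjacency-set representation): it verifies that each new vertex's neighbourhood is a module, i.e. no earlier vertex outside the neighbourhood is adjacent to some but not all of it.

```python
def is_module_sequence(adj, order):
    """adj: dict mapping vertex -> set of neighbours; order: v_1..v_n.
    Check that N(v_i), restricted to v_1..v_{i-1}, is a module there."""
    prev = set()
    for v in order:
        nbrs = adj[v] & prev
        for u in prev - nbrs:
            link = adj[u] & nbrs
            if link and link != nbrs:   # u distinguishes vertices of the module
                return False
        prev.add(v)
    return True
```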
The set-cover greedy algorithm is a natural approximation algorithm for the test set problem. This paper gives a more precise and tighter analysis of the performance guarantee of this algorithm. The author improves the performance guarantee of $2\ln n$, which derives from the set cover problem, to $1.1354\ln n$ by applying the potential function technique. In addition, the author gives a nontrivial lower bound of $1.0004609\ln n$ on the performance guarantee of this algorithm. This lower bound, together with the matching bound for the information content heuristic, confirms that the information content heuristic is slightly better than the set-cover greedy algorithm in the worst case.
A Tighter Analysis of Setcover Greedy Algorithm for Test Set
4,078
We consider the well known \emph{Least Recently Used} (LRU) replacement algorithm and analyze it under the independent reference model and generalized power-law demand. For this extensive family of demand distributions we derive a closed-form expression for the per object steady-state hit ratio. To the best of our knowledge, this is the first analytic derivation of the per object hit ratio of LRU that can be obtained in constant time without requiring laborious numeric computations or simulation. Since most applications of replacement algorithms include (at least) some scenarios under i.i.d. requests, our method has substantial practical value, especially when having to analyze multiple caches, where existing numeric methods and simulation become too time consuming.
A Closed-Form Method for LRU Replacement under Generalized Power-Law Demand
4,079
We show that the absolute worst-case time complexity of Hopcroft's minimization algorithm applied to unary languages is reached only for de Bruijn words. A previous paper by Berstel and Carton gave de Bruijn words as an example of a language that requires O(n log n) steps when the splitting sets are carefully chosen and processed in FIFO order. We refine that result by showing that the Berstel/Carton example actually gives the absolute worst-case time complexity in the case of unary languages. We also show that a LIFO implementation does not achieve the same worst-case time complexity for unary languages. Lastly, we show that the same result is also valid for cover automata and a modification of Hopcroft's algorithm used in the minimization of cover automata.
On Hopcroft's minimization algorithm
4,080
A quantum algorithm is a set of instructions for a quantum computer; however, unlike algorithms in classical computer science, its results cannot be guaranteed. A quantum system can undergo two types of operation, measurement and quantum state transformation; transformations themselves must be unitary (reversible). Most quantum algorithms involve a series of quantum state transformations followed by a measurement. Currently very few quantum algorithms are known, and no general design methodology exists for their construction.
Grover search algorithm
4,081
In this paper, we propose a useful replacement for quicksort-style utility functions. The replacement is called Symmetry Partition Sort, which has essentially the same principle as Proportion Extend Sort. The main difference between them is that the new algorithm always places already partially sorted inputs (used as a basis for the proportional extension) on both ends when entering the partition routine. This helps speed up the partition routine. The library function based on the new algorithm is more attractive than Psort, a library function introduced in 2004. Its implementation is simple, its source code is clearer, and it is faster, with an O(n log n) performance guarantee. Both its robustness and adaptivity are better. As a library function, it is competitive.
Symmetry Partition Sort
4,082
We raise the question of approximating the compressibility of a string with respect to a fixed compression scheme, in sublinear time. We study this question in detail for two popular lossless compression schemes: run-length encoding (RLE) and Lempel-Ziv (LZ), and present sublinear algorithms for approximating compressibility with respect to both schemes. We also give several lower bounds that show that our algorithms for both schemes cannot be improved significantly. Our investigation of LZ yields results whose interest goes beyond the initial questions we set out to study. In particular, we prove combinatorial structural lemmas that relate the compressibility of a string with respect to Lempel-Ziv to the number of distinct short substrings contained in it. In addition, we show that approximating the compressibility with respect to LZ is related to approximating the support size of a distribution.
Sublinear Algorithms for Approximating String Compressibility
4,083
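As a toy illustration of the RLE half of the problem above: the number of runs in a string equals one plus the number of positions where adjacent characters differ, so sampling positions gives a cheap additive-error estimate of the RLE length (the paper's algorithms are more careful and come with stated guarantees; this sketch is not theirs).

```python
import random

def estimate_rle_length(s, samples, rng=random):
    """Estimate the run-length-encoding length of s from `samples`
    random positions, reading only O(samples) characters of s."""
    n = len(s)
    boundary_hits = sum(
        s[i] != s[i - 1] for i in (rng.randrange(1, n) for _ in range(samples))
    )
    return 1 + (n - 1) * boundary_hits / samples
```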
We develop an experimental algorithm for exactly solving the maximum independent set problem. The algorithm consecutively finds maximal independent sets of vertices in an arbitrary undirected graph such that each next set contains more elements than the preceding one. For this purpose, we use a technique developed by Ford and Fulkerson for finite partially ordered sets, in particular their method for partitioning a poset into the minimum number of chains while finding a maximum antichain. In the process of solving, a special digraph is constructed, and a conjecture is formulated concerning the properties of such digraphs. This allows us to propose the solution algorithm. Its theoretical running time estimate is $O(n^{8})$, where $n$ is the number of graph vertices. The algorithm was tested by a program on random graphs. The testing confirms the correctness of the algorithm.
Experimental Algorithm for the Maximum Independent Set Problem
4,084
It is well known that n integers in the range [1,n^c] can be sorted in O(n) time in the RAM model using radix sorting. More generally, integers in any range [1,U] can be sorted in O(n sqrt{loglog n}) time. However, these algorithms use O(n) words of extra memory. Is this necessary? We present a simple, stable, integer sorting algorithm for words of size O(log n), which works in O(n) time and uses only O(1) words of extra memory on a RAM model. This is the integer sorting case most useful in practice. We extend this result with same bounds to the case when the keys are read-only, which is of theoretical interest. Another interesting question is the case of arbitrary c. Here we present a black-box transformation from any RAM sorting algorithm to a sorting algorithm which uses only O(1) extra space and has the same running time. This settles the complexity of in-place sorting in terms of the complexity of sorting.
Radix Sorting With No Extra Space
4,085
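For reference, the standard stable LSD radix sort mentioned in the first sentence of the abstract above; the per-pass bucket lists are exactly the O(n) words of extra memory that the paper's in-place algorithm eliminates.

```python
def radix_sort(a, word_bits=32, radix_bits=8):
    """Stable LSD radix sort of nonnegative integers: O(n) time for
    O(log n)-bit words, but O(n) words of extra memory."""
    mask = (1 << radix_bits) - 1
    for shift in range(0, word_bits, radix_bits):
        buckets = [[] for _ in range(mask + 1)]
        for x in a:                       # stable: equal digits keep order
            buckets[(x >> shift) & mask].append(x)
        a = [x for b in buckets for x in b]
    return a
```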
We study the problem of assigning jobs to applicants. Each applicant has a weight and provides a preference list ranking a subset of the jobs. A matching M is popular if there is no other matching M' such that the weight of the applicants who prefer M' over M exceeds the weight of those who prefer M over M'. This paper gives efficient algorithms to find a popular matching if one exists.
Weighted Popular Matchings
4,086
The k-forest problem is a common generalization of both the k-MST and the dense-$k$-subgraph problems. Formally, given a metric space on $n$ vertices $V$, with $m$ demand pairs $\subseteq V \times V$ and a ``target'' $k\le m$, the goal is to find a minimum cost subgraph that connects at least $k$ demand pairs. In this paper, we give an $O(\min\{\sqrt{n},\sqrt{k}\})$-approximation algorithm for $k$-forest, improving on the previous best ratio of $O(n^{2/3}\log n)$ by Segev & Segev. We then apply our algorithm for k-forest to obtain approximation algorithms for several Dial-a-Ride problems. The basic Dial-a-Ride problem is the following: given an $n$ point metric space with $m$ objects each with its own source and destination, and a vehicle capable of carrying at most $k$ objects at any time, find the minimum length tour that uses this vehicle to move each object from its source to destination. We prove that an $\alpha$-approximation algorithm for the $k$-forest problem implies an $O(\alpha\cdot\log^2n)$-approximation algorithm for Dial-a-Ride. Using our results for $k$-forest, we get an $O(\min\{\sqrt{n},\sqrt{k}\}\cdot\log^2 n)$- approximation algorithm for Dial-a-Ride. The only previous result known for Dial-a-Ride was an $O(\sqrt{k}\log n)$-approximation by Charikar & Raghavachari; our results give a different proof of a similar approximation guarantee--in fact, when the vehicle capacity $k$ is large, we give a slight improvement on their results.
Dial a Ride from k-forest
4,087
In this paper we study noisy sorting without re-sampling. In this problem there is an unknown order $a_{\pi(1)} < ... < a_{\pi(n)}$, where $\pi$ is a permutation on $n$ elements. The input is the status of ${n \choose 2}$ queries of the form $q(a_i,a_j)$, where $q(a_i,a_j) = +$ with probability at least $1/2+\gamma$ if $\pi(i) > \pi(j)$, for all pairs $i \neq j$, where $\gamma > 0$ is a constant and $q(a_i,a_j) = -q(a_j,a_i)$ for all $i$ and $j$. It is assumed that the errors are independent. Given the status of the queries, the goal is to find the maximum likelihood order. In other words, the goal is to find a permutation $\sigma$ that minimizes the number of pairs with $\sigma(i) > \sigma(j)$ and $q(\sigma(i),\sigma(j)) = -$. The problem so defined is the feedback arc set problem on distributions of inputs, each of which is a tournament obtained as a noisy perturbation of a linear order. Note that when $\gamma < 1/2$ and $n$ is large, it is impossible to recover the original order $\pi$. It is known that the weighted feedback arc set problem on tournaments is NP-hard in general. Here we present an algorithm of running time $n^{O(\gamma^{-4})}$ and sampling complexity $O_{\gamma}(n \log n)$ that with high probability solves the noisy sorting without re-sampling problem. We also show that if $a_{\sigma(1)},a_{\sigma(2)},...,a_{\sigma(n)}$ is an optimal solution of the problem then it is ``close'' to the original order. More formally, with high probability it holds that $\sum_i |\sigma(i) - \pi(i)| = \Theta(n)$ and $\max_i |\sigma(i) - \pi(i)| = \Theta(\log n)$. Our results are of interest in applications to ranking, such as ranking in sports, or ranking of search items based on comparisons by experts.
Noisy Sorting Without Resampling
4,088
The Lp regression problem takes as input a matrix $A \in \mathbb{R}^{n \times d}$, a vector $b \in \mathbb{R}^n$, and a number $p \in [1,\infty)$, and it returns as output a number ${\cal Z}$ and a vector $x_{opt} \in \mathbb{R}^d$ such that ${\cal Z} = \min_{x \in \mathbb{R}^d} ||Ax -b||_p = ||Ax_{opt}-b||_p$. In this paper, we construct coresets and obtain an efficient two-stage sampling-based approximation algorithm for the very overconstrained ($n \gg d$) version of this classical problem, for all $p \in [1, \infty)$. The first stage of our algorithm non-uniformly samples $\hat{r}_1 = O(36^p d^{\max\{p/2+1, p\}+1})$ rows of $A$ and the corresponding elements of $b$, and then it solves the Lp regression problem on the sample; we prove this is an 8-approximation. The second stage of our algorithm uses the output of the first stage to resample $\hat{r}_1/\epsilon^2$ constraints, and then it solves the Lp regression problem on the new sample; we prove this is a $(1+\epsilon)$-approximation. Our algorithm unifies, improves upon, and extends the existing algorithms for special cases of Lp regression, namely $p = 1,2$. In the course of proving our result, we develop two concepts--well-conditioned bases and subspace-preserving sampling--that are of independent interest.
Sampling Algorithms and Coresets for Lp Regression
4,089
We introduce a new technique to bound the asymptotic performance of splay trees. The basic idea is to transcribe, in an indirect fashion, the rotations performed by the splay tree as a Davenport-Schinzel sequence S, none of whose subsequences are isomorphic to a fixed forbidden subsequence. We direct this technique towards Tarjan's deque conjecture and prove that n deque operations require O(n alpha^*(n)) time, where alpha^*(n) is the minimum number of applications of the inverse-Ackermann function mapping n to a constant. We are optimistic that this approach can be directed towards other open conjectures on splay trees, such as the traversal and split conjectures.
Splay Trees, Davenport-Schinzel Sequences, and the Deque Conjecture
4,090
A pair of complementary algorithms is presented. One of the pair is a fast method for connecting graphs with an edge. The other is a fast method for removing edges from a graph. Both algorithms employ the same tree-based graph representation and so, in concert, can arbitrarily modify any graph. Since the clusters of a percolation model may be described as simple connected graphs, an efficient Monte Carlo scheme can be constructed that uses the algorithms to sweep the occupation probability back and forth between two turning points. This approach concentrates computational sampling time within a region of interest. A high-precision value of pc = 0.59274603(9) was thus obtained, using the Mersenne twister, for the two-dimensional square site percolation threshold.
Complementary algorithms for graphs and percolation
4,091
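For the "connect" direction of the abstract above only, the standard union-find sweep sketched below (in the style of Newman and Ziff, as an illustrative stand-in) occupies sites one by one and merges clusters; union-find cannot remove edges, which is precisely the capability the paper's tree-based representation adds.

```python
import random

def find(parent, x):
    while parent[x] != x:
        parent[x] = parent[parent[x]]     # path halving
        x = parent[x]
    return x

def site_percolation_sweep(L, rng=random):
    """Occupy the sites of an L x L square lattice in random order,
    tracking the largest cluster size after each occupation."""
    n = L * L
    parent, size = list(range(n)), [1] * n
    occupied = [False] * n
    sites = list(range(n))
    rng.shuffle(sites)
    history, big = [], 0
    for s in sites:
        occupied[s] = True
        r, c = divmod(s, L)
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if 0 <= nr < L and 0 <= nc < L and occupied[nr * L + nc]:
                a, b = find(parent, s), find(parent, nr * L + nc)
                if a != b:                # union by size
                    if size[a] < size[b]:
                        a, b = b, a
                    parent[b] = a
                    size[a] += size[b]
        big = max(big, size[find(parent, s)])
        history.append(big)
    return history
```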
Tree structures are among the most frequently used data structures. Among ordered types of trees there are many variants whose basic operations, such as insert, delete, search and delete-min, have logarithmic time complexity. In this article I present a structure whose time complexity for each of the above operations is $O(\frac{M}{K} + K)$, where M is the size of the data type and K is a constant properly matched to that size. A properly matched K makes the structure function as a very effective priority queue. The structure's size depends linearly on the number and size of its elements. PTrie is a combination of the idea of the prefix tree (trie), a structure with logarithmic time complexity for insert and remove operations, with doubly linked lists and queues.
Priority Queue Based on Multilevel Prefix Tree
4,092
In this paper I present a general outlook on two basic graph problems: finding the shortest path with positive weights and the minimum spanning tree. I survey the solutions known so far and present my own. My solutions are characterized by linear worst-case time complexity. It should be noted that the algorithms for the shortest path and minimum spanning tree problems not only analyze the weights of arcs (the main, and often the only, criterion in hitherto known algorithms) but, in the case of identical path weights, also select the path that passes through as few vertices as possible. The algorithms use a priority queue based on a multilevel prefix tree, PTrie. PTrie is a combination of the idea of the prefix tree (trie), a structure with logarithmic time complexity for insert and remove operations, with doubly linked lists and queues. In C++, I implement a linear worst-case time algorithm for the single-destination shortest-paths problem and explain its usage.
Linear Time Algorithms Based on Multilevel Prefix Tree for Finding Shortest Path with Positive Weights and Minimum Spanning Tree in Networks
4,093
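The tie-breaking rule described in the abstract above can be illustrated with a textbook binary-heap Dijkstra using lexicographic (distance, hop count) keys; this sketch is only a stand-in, since the paper's point is replacing the heap with the PTrie-based queue to reach linear worst-case time.

```python
import heapq

def dijkstra_fewest_hops(adj, s):
    """adj: dict u -> list of (v, w); returns dict v -> (dist, hops).
    Among equal-weight shortest paths, prefers the one with fewer vertices."""
    best = {s: (0, 0)}
    pq = [(0, 0, s)]
    while pq:
        d, h, u = heapq.heappop(pq)
        if (d, h) > best[u]:
            continue                       # stale queue entry
        for v, w in adj[u]:
            cand = (d + w, h + 1)
            if v not in best or cand < best[v]:
                best[v] = cand
                heapq.heappush(pq, (cand[0], cand[1], v))
    return best
```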
Many data analysis applications deal with large matrices and involve approximating the matrix using a small number of ``components.'' Typically, these components are linear combinations of the rows and columns of the matrix, and are thus difficult to interpret in terms of the original features of the input data. In this paper, we propose and study matrix approximations that are explicitly expressed in terms of a small number of columns and/or rows of the data matrix, and thereby more amenable to interpretation in terms of the original data. Our main algorithmic results are two randomized algorithms which take as input an $m \times n$ matrix $A$ and a rank parameter $k$. In our first algorithm, $C$ is chosen, and we let $A'=CC^+A$, where $C^+$ is the Moore-Penrose generalized inverse of $C$. In our second algorithm $C$, $U$, $R$ are chosen, and we let $A'=CUR$. ($C$ and $R$ are matrices that consist of actual columns and rows, respectively, of $A$, and $U$ is a generalized inverse of their intersection.) For each algorithm, we show that with probability at least $1-\delta$: $$ ||A-A'||_F \leq (1+\epsilon) ||A-A_k||_F, $$ where $A_k$ is the ``best'' rank-$k$ approximation provided by truncating the singular value decomposition (SVD) of $A$. The number of columns of $C$ and rows of $R$ is a low-degree polynomial in $k$, $1/\epsilon$, and $\log(1/\delta)$. Our two algorithms are the first polynomial time algorithms for such low-rank matrix approximations that come with relative-error guarantees; previously, in some cases, it was not even known whether such matrix decompositions exist. Both of our algorithms are simple, they take time of the order needed to approximately compute the top $k$ singular vectors of $A$, and they use a novel, intuitive sampling method called ``subspace sampling.''
Relative-Error CUR Matrix Decompositions
4,094
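A simplified numpy sketch of the CUR shape from the abstract above, with $U$ taken as a generalized inverse of the sampled intersection $W$; for brevity the columns and rows are sampled by squared norms, a stand-in for the paper's subspace sampling, so the relative-error guarantee quoted above does not apply to this toy version.

```python
import numpy as np

def cur_approx(A, c, r, rng=np.random.default_rng(0)):
    """Pick c columns and r rows of A with probability proportional to
    their squared norms; return C, U, R with A ~ C @ U @ R."""
    pc = (A ** 2).sum(axis=0); pc = pc / pc.sum()
    pr = (A ** 2).sum(axis=1); pr = pr / pr.sum()
    cols = rng.choice(A.shape[1], size=c, replace=False, p=pc)
    rows = rng.choice(A.shape[0], size=r, replace=False, p=pr)
    C, R = A[:, cols], A[rows, :]
    U = np.linalg.pinv(A[np.ix_(rows, cols)])   # generalized inverse of W
    return C, U, R
```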
The unit cost model is both convenient and largely realistic for describing integer decision algorithms over (+,*). Additional operations like division with remainder or bitwise conjunction, although equally supported by computing hardware, may lead to a considerable drop in complexity. We show that a variety of concrete problems benefit from such non-arithmetic primitives by presenting and analyzing corresponding fast algorithms.
On Faster Integer Calculations using Non-Arithmetic Primitives
4,095
There is a growing body of work on sorting and selection in models other than the unit-cost comparison model. This work is the first treatment of a natural stochastic variant of the problem where the cost of comparing two elements is a random variable. Each cost is chosen independently and is known to the algorithm. In particular we consider the following three models: each cost is chosen uniformly in the range $[0,1]$, each cost is 0 with some probability $p$ and 1 otherwise, or each cost is 1 with probability $p$ and infinite otherwise. We present lower and upper bounds (optimal in most cases) for these problems. We obtain our upper bounds by carefully designing algorithms to ensure that the costs incurred at various stages are independent and using properties of random partial orders when appropriate.
Sorting and Selection with Random Costs
4,096
Least squares approximation is a technique to find an approximate solution to a system of linear equations that has no exact solution. In a typical setting, one lets $n$ be the number of constraints and $d$ be the number of variables, with $n \gg d$. Then, existing exact methods find a solution vector in $O(nd^2)$ time. We present two randomized algorithms that provide very accurate relative-error approximations to the optimal value and the solution vector of a least squares approximation problem more rapidly than existing exact algorithms. Both of our algorithms preprocess the data with the Randomized Hadamard Transform. One then uniformly randomly samples constraints and solves the smaller problem on those constraints, and the other performs a sparse random projection and solves the smaller problem on those projected coordinates. In both cases, solving the smaller problem provides relative-error approximations, and, if $n$ is sufficiently larger than $d$, the approximate solution can be computed in $O(nd \log d)$ time.
Faster Least Squares Approximation
4,097
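A minimal sketch-and-solve illustration of the shape of the algorithms in the abstract above: project the constraints down and solve the small problem exactly. For simplicity a dense Gaussian projection stands in for the paper's randomized Hadamard transform plus sampling or sparse projection, so this toy version does not achieve the $O(nd \log d)$ bound quoted above.

```python
import numpy as np

def sketched_lstsq(A, b, s, rng=np.random.default_rng(0)):
    """Solve min ||Ax - b|| approximately by solving the s-row sketched
    problem min ||(SA)x - (Sb)||, with s a small multiple of d."""
    n, d = A.shape
    assert s >= d
    S = rng.standard_normal((s, n)) / np.sqrt(s)   # random projection
    x, *_ = np.linalg.lstsq(S @ A, S @ b, rcond=None)
    return x
```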
We address the problem of minimizing power consumption when performing reliable broadcast on a radio network under the following popular model. Each node in the network is located on a point in a two dimensional grid, and whenever a node sends a message, all awake nodes within distance r receive the message. In the broadcast problem, some node wants to successfully send a message to all other nodes in the network even when up to a 1/2 fraction of the nodes within every neighborhood can be deleted by an adversary. The set of deleted nodes is carefully chosen by the adversary to foil our algorithm and moreover, the set of deleted nodes may change periodically. This models worst-case behavior due to mobile nodes, static nodes losing power or simply some points in the grid being unoccupied. A trivial solution requires each node in the network to be awake roughly 1/2 the time, and a trivial lower bound shows that each node must be awake for at least a 1/n fraction of the time. Our first result is an algorithm that requires each node to be awake for only a 1/sqrt(n) fraction of the time in expectation. Our algorithm achieves this while ensuring correctness with probability 1, and keeping optimal values for other resource costs such as latency and number of messages sent. We give a lower-bound that shows that this reduction in power consumption is asymptotically optimal when latency and number of messages sent must be optimal. If we can increase the latency and messages sent by only a log*n factor we give a Las Vegas algorithm that requires each node to be awake for only a (log*n)/n expected fraction of the time; we give a lower-bound showing that this second algorithm is near optimal. Finally, we show how to ensure energy-efficient broadcast in the presence of Byzantine faults.
Sleeping on the Job: Energy-Efficient Broadcast for Radio Networks
4,098
We give a new algorithm for performing the distinct-degree factorization of a polynomial P(x) over GF(2), using a multi-level blocking strategy. The coarsest level of blocking replaces GCD computations by multiplications, as suggested by Pollard (1975), von zur Gathen and Shoup (1992), and others. The novelty of our approach is that a finer level of blocking replaces multiplications by squarings, which speeds up the computation in GF(2)[x]/P(x) of certain interval polynomials when P(x) is sparse. As an application we give a fast algorithm to search for all irreducible trinomials x^r + x^s + 1 of degree r over GF(2), while producing a certificate that can be checked in less time than the full search. Naive algorithms cost O(r^2) per trinomial, thus O(r^3) to search over all trinomials of given degree r. Under a plausible assumption about the distribution of factors of trinomials, the new algorithm has complexity O(r^2 (log r)^{3/2}(log log r)^{1/2}) for the search over all trinomials of degree r. Our implementation achieves a speedup of greater than a factor of 560 over the naive algorithm in the case r = 24036583 (a Mersenne exponent). Using our program, we have found two new primitive trinomials of degree 24036583 over GF(2) (the previous record degree was 6972593).
A Multi-level Blocking Distinct Degree Factorization Algorithm
4,099