We present two algorithms for maintaining the topological order of a directed acyclic graph with n vertices under an online sequence of m edge insertions. Efficient algorithms for online topological ordering have many applications, including online cycle detection: discovering the first edge that introduces a cycle under an arbitrary sequence of edge insertions in a directed graph. We first present a simple algorithm with running time O(n^{5/2}). This is the currently fastest algorithm for this problem on dense graphs, i.e., when m > n^{5/3}. We then present an algorithm with running time O((m + n \log n)\sqrt{m}), which is more efficient for sparse graphs. Together these results yield an improved upper bound of O(\min(n^{5/2}, (m + n \log n)\sqrt{m})) for the online topological ordering problem.
Faster Algorithms for Online Topological Ordering
4,100
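As context for the abstract above, here is a minimal Python sketch of the online topological ordering setting. It uses the simple local-reordering idea in the style of Pearce and Kelly, not the paper's faster algorithms; the class and method names are ours:

    class OnlineTopo:
        """Maintain a topological order of a DAG under edge insertions."""
        def __init__(self, n):
            self.adj = [[] for _ in range(n)]    # forward edges
            self.radj = [[] for _ in range(n)]   # backward edges
            self.ord = list(range(n))            # ord[v] = position of v

        def insert(self, u, v):
            """Add edge u->v, repairing the order or reporting a cycle."""
            self.adj[u].append(v)
            self.radj[v].append(u)
            lb, ub = self.ord[v], self.ord[u]
            if lb > ub:
                return                             # order still valid
            fwd = self._dfs(v, self.adj, lb, ub)   # reachable from v
            if u in fwd:
                raise ValueError(f"edge ({u},{v}) creates a cycle")
            bwd = self._dfs(u, self.radj, lb, ub)  # vertices reaching u
            # Reuse the affected positions: everything reaching u goes
            # before everything reachable from v, old order preserved.
            slots = sorted(self.ord[x] for x in bwd + fwd)
            order = sorted(bwd, key=lambda x: self.ord[x]) + \
                    sorted(fwd, key=lambda x: self.ord[x])
            for x, p in zip(order, slots):
                self.ord[x] = p

        def _dfs(self, s, adj, lb, ub):
            """Vertices reachable from s via positions within [lb, ub]."""
            seen, stack, out = {s}, [s], []
            while stack:
                x = stack.pop()
                out.append(x)
                for y in adj[x]:
                    if y not in seen and lb <= self.ord[y] <= ub:
                        seen.add(y)
                        stack.append(y)
            return out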
Motivated by an application in computational topology, we consider a novel variant of the problem of efficiently maintaining dynamic rooted trees. This variant requires merging two paths in a single operation. In contrast to the standard problem, in which only one tree arc changes at a time, a single merge operation can change many arcs. In spite of this, we develop a data structure that supports merges on an n-node forest in O(log^2 n) amortized time and all other standard tree operations in O(log n) time (amortized, worst-case, or randomized depending on the underlying data structure). For the special case that occurs in the motivating application, in which arbitrary arc deletions (cuts) are not allowed, we give a data structure with an O(log n) time bound per operation. This is asymptotically optimal under certain assumptions. For the even more special case in which both cuts and parent queries are disallowed, we give an alternative O(log n)-time solution that uses standard dynamic trees as a black box. This solution also applies to the motivating application. Our methods use previous work on dynamic trees in various ways, but the analysis of each algorithm requires novel ideas. We also investigate lower bounds for the problem under various assumptions.
Data Structures for Mergeable Trees
4,101
We present approximation algorithms for almost all variants of the multi-criteria traveling salesman problem (TSP). First, we devise randomized approximation algorithms for multi-criteria maximum traveling salesman problems (Max-TSP). For multi-criteria Max-STSP, where the edge weights have to be symmetric, we devise an algorithm with an approximation ratio of 2/3 - eps. For multi-criteria Max-ATSP, where the edge weights may be asymmetric, we present an algorithm with a ratio of 1/2 - eps. Our algorithms work for any fixed number k of objectives. Furthermore, we present a deterministic algorithm for bi-criteria Max-STSP that achieves an approximation ratio of 7/27. Finally, we present a randomized approximation algorithm for the asymmetric multi-criteria minimum TSP with triangle inequality Min-ATSP. This algorithm achieves a ratio of log n + eps.
On Approximating Multi-Criteria TSP
4,102
The Metric Traveling Salesman Problem (TSP) is a classical NP-hard optimization problem. The double-tree shortcutting method for Metric TSP yields an exponentially-sized space of TSP tours, each of which approximates the optimal solution within at most a factor of 2. We consider the problem of finding among these tours the one that gives the closest approximation, i.e.\ the \emph{minimum-weight double-tree shortcutting}. Previously, we gave an efficient algorithm for this problem, and carried out its experimental analysis. In this paper, we address the related question of the worst-case approximation ratio for the minimum-weight double-tree shortcutting method. In particular, we give lower bounds on the approximation ratio in some specific metric spaces: the ratio of 2 in the discrete shortest path metric, 1.622 in the planar Euclidean metric, and 1.666 in the planar Minkowski metric. The first of these lower bounds is tight; we conjecture that the other two bounds are also tight, and in particular that the minimum-weight double-tree method provides a 1.622-approximation for planar Euclidean TSP.
Minimum-weight double-tree shortcutting for Metric TSP: Bounding the approximation ratio
4,103
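Since the double-tree method above is a fixture of metric TSP, a short sketch may help fix ideas. The following Python computes one arbitrary double-tree tour (a DFS preorder of the MST, i.e., one particular shortcutting of the doubled-tree Euler tour), whereas the paper concerns selecting the minimum-weight shortcutting; the function names are ours:

    import math

    def double_tree_tour(points):
        """One classical double-tree tour: build an MST, then shortcut
        the doubled-MST Euler tour to a cycle (a 2-approximation)."""
        n = len(points)
        d = lambda i, j: math.dist(points[i], points[j])
        # Prim's algorithm for the MST.
        in_tree, best = [False] * n, [math.inf] * n
        parent, children = [-1] * n, [[] for _ in range(n)]
        best[0] = 0.0
        for _ in range(n):
            u = min((i for i in range(n) if not in_tree[i]),
                    key=lambda i: best[i])
            in_tree[u] = True
            if parent[u] >= 0:
                children[parent[u]].append(u)
            for w in range(n):
                if not in_tree[w] and d(u, w) < best[w]:
                    best[w], parent[w] = d(u, w), u
        # DFS preorder = one shortcutting of the doubled tree.
        tour, stack = [], [0]
        while stack:
            u = stack.pop()
            tour.append(u)
            stack.extend(reversed(children[u]))
        return tour, sum(d(tour[i], tour[(i + 1) % n]) for i in range(n))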
We consider the problem of finding a feasible single-commodity flow in a strongly connected network with fixed supplies and demands, provided that the sum of supplies equals the sum of demands and the minimum arc capacity is at least this sum. A fast algorithm for this problem improves the worst-case time bound of the Goldberg-Rao maximum flow method by a constant factor. Erlebach and Hagerup gave a linear-time feasible flow algorithm. We give an arguably simpler one.
Finding a Feasible Flow in a Strongly Connected Network
4,104
We propose a fully dynamic algorithm for maintaining reachability information in directed graphs. The proposed deterministic dynamic algorithm has an update time of $O(ins \cdot n^{2} + del \cdot (m+n\log n))$, where $m$ is the current number of edges, $n$ is the number of vertices in the graph, $ins$ is the number of edge insertions, and $del$ is the number of edge deletions. Each query can be answered in O(1) time after each update. The proposed algorithm combines an existing fully dynamic reachability algorithm with the well-known witness counting technique to maintain reachability information more efficiently when edges are deleted. For edge deletions, the proposed algorithm improves on the best existing fully dynamic reachability algorithm by a factor of $O(\frac{n^2}{m+n\log n})$.
Improved Fully Dynamic Reachability Algorithm for Directed Graph
4,105
The restless bandit problem is one of the most well-studied generalizations of the celebrated stochastic multi-armed bandit problem in decision theory. In its ultimate generality, the restless bandit problem is known to be PSPACE-Hard to approximate to any non-trivial factor, and little progress has been made despite its importance in modeling activity allocation under uncertainty. We consider a special case that we call Feedback MAB, where the reward obtained by playing each of n independent arms varies according to an underlying on/off Markov process whose exact state is only revealed when the arm is played. The goal is to design a policy for playing the arms in order to maximize the infinite horizon time average expected reward. This problem is also an instance of a Partially Observable Markov Decision Process (POMDP), and is widely studied in wireless scheduling and unmanned aerial vehicle (UAV) routing. Unlike the stochastic MAB problem, the Feedback MAB problem does not admit greedy index-based optimal policies. We develop a novel and general duality-based algorithmic technique that yields a surprisingly simple and intuitive (2+epsilon)-approximate greedy policy for this problem. We then define a general sub-class of restless bandit problems that we term Monotone bandits, for which our policy is a 2-approximation. Our technique is robust enough to handle generalizations of these problems to incorporate various side-constraints such as blocking plays and switching costs. This technique is also of independent interest for other restless bandit problems. By presenting the first (and efficient) O(1) approximations for non-trivial instances of restless bandits as well as of POMDPs, our work initiates the study of approximation algorithms in both these contexts.
Approximation Algorithms for Restless Bandit Problems
4,106
Let ${\cal V}$ be a finite set of $n$ elements and ${\cal F}=\{X_1,X_2,\ldots,X_m\}$ a family of $m$ subsets of ${\cal V}$. Two sets $X_i$ and $X_j$ of ${\cal F}$ overlap if $X_i \cap X_j \neq \emptyset,$ $X_j \setminus X_i \neq \emptyset,$ and $X_i \setminus X_j \neq \emptyset.$ Two sets $X,Y\in {\cal F}$ are in the same overlap class if there is a series $X=X_1,X_2,\ldots,X_k=Y$ of sets of ${\cal F}$ in which each $X_i$ overlaps $X_{i+1}$. In this note, we focus on efficiently identifying all overlap classes in $O(n+\sum_{i=1}^m |X_i|)$ time. We thus revisit the clever algorithm of Dahlhaus, of which we give a clear presentation and which we simplify to make it practical and implementable within its real worst-case complexity. A useful variant of Dahlhaus's approach is also explained.
A Note On Computing Set Overlap Classes
4,107
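To make the definition above concrete, here is a quadratic-time Python sketch that groups sets into overlap classes with union-find; it is for illustration only, not Dahlhaus's $O(n+\sum |X_i|)$ algorithm discussed in the note:

    from itertools import combinations

    def overlap_classes(sets):
        """Partition indices of 'sets' into overlap classes, testing
        the overlap condition pairwise (quadratic; illustration only)."""
        m = len(sets)
        parent = list(range(m))
        def find(i):
            while parent[i] != i:
                parent[i] = parent[parent[i]]   # path halving
                i = parent[i]
            return i
        for i, j in combinations(range(m), 2):
            a, b = set(sets[i]), set(sets[j])
            if (a & b) and (a - b) and (b - a):   # X_i, X_j overlap
                parent[find(i)] = find(j)
        classes = {}
        for i in range(m):
            classes.setdefault(find(i), []).append(i)
        return list(classes.values())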
Orienteering is the following optimization problem: given an edge-weighted graph (directed or undirected), two nodes s,t and a time limit T, find an s-t walk of total length at most T that maximizes the number of distinct nodes visited by the walk. One obtains a generalization, namely orienteering with time-windows (also referred to as TSP with time-windows), if each node v has a specified time-window [R(v), D(v)] and a node v is counted as visited by the walk only if v is visited during its time-window. For the time-window problem, an O(log OPT) approximation can be achieved even for directed graphs if the algorithm is allowed quasi-polynomial time. However, the best known polynomial time approximation ratios are O(log^2 OPT) for undirected graphs and O(log^4 OPT) in directed graphs. In this paper we make some progress towards closing this discrepancy, and in the process obtain improved approximation ratios in several natural settings. Let L(v) = D(v) - R(v) denote the length of the time-window for v, and let L_max = max_v L(v) and L_min = min_v L(v). Our results are given below with alpha denoting the known approximation ratio for orienteering (without time-windows). Currently alpha = (2+eps) for undirected graphs and alpha = O(log^2 OPT) in directed graphs. 1. An O(alpha log L_max) approximation when R(v) and D(v) are integer valued for each v. 2. An O(alpha max{log OPT, log(L_max/L_min)}) approximation. 3. An O(alpha log(L_max/L_min)) approximation when no start and end points are specified. In particular, if L_max/L_min is poly-bounded, we obtain an O(log n) approximation for the time-window problem in undirected graphs.
Approximation Algorithms for Orienteering with Time Windows
4,108
When a store sells items to customers, it wishes to set the prices of the items so as to maximize its profit. Intuitively, if the store sets low (resp. high) prices, the customers buy more (resp. fewer) items but each sale yields less (resp. more) profit, so deciding the prices of the items is hard for the store. Assume that the store has a set V of n items and there is a set E of m customers who wish to buy those items; each item i \in V has production cost d_i and each customer e_j \in E has valuation v_j on the bundle e_j \subseteq V of items. When the store sells an item i \in V at price r_i, the profit for item i is p_i = r_i - d_i. The goal of the store is to set the price of each item so as to maximize its total profit. Most previous work considered the item pricing problem under the assumption that p_i \geq 0 for each i \in V; however, Balcan et al. [In Proc. of WINE, LNCS 4858, 2007] introduced the notion of a loss-leader and showed that the seller can obtain more total profit when p_i < 0 is allowed than when it is not. In this paper, we consider the line and cycle highway problems and give approximation algorithms for the cases in which the smallest valuation is s and the largest valuation is \ell, and in which all valuations are identical.
Approximation Algorithms for the Highway Problem under the Coupon Model
4,109
A compressed full-text self-index represents a text in a compressed form and still answers queries efficiently. This technology represents a breakthrough over the text indexing techniques of the previous decade, whose indexes required several times the size of the text. Although it is relatively new, this technology has matured up to a point where theoretical research is giving way to practical developments. Nonetheless this requires significant programming skills, a deep engineering effort, and a strong algorithmic background to dig into the research results. To date only isolated implementations and focused comparisons of compressed indexes have been reported, and they lacked a common API, which prevented their re-use or deployment within other applications. The goal of this paper is to fill this gap. First, we present the existing implementations of compressed indexes from a practitioner's point of view. Second, we introduce the Pizza&Chili site, which offers tuned implementations and a standardized API for the most successful compressed full-text self-indexes, together with effective testbeds and scripts for their automatic validation and test. Third, we show the results of our extensive experiments on these codes with the aim of demonstrating the practical relevance of this novel and exciting technology.
Compressed Text Indexes: From Theory to Practice!
4,110
The Steiner tree problem is a classical NP-hard optimization problem with a wide range of practical applications. In an instance of this problem, we are given an undirected graph G=(V,E), a set of terminals R, and non-negative costs c_e for all edges e in E. Any tree that contains all terminals is called a Steiner tree; the goal is to find a minimum-cost Steiner tree. The nodes V \ R are called Steiner nodes. The best approximation algorithm known for the Steiner tree problem is due to Robins and Zelikovsky (SIAM J. Discrete Math, 2005); their greedy algorithm achieves a performance guarantee of 1+(ln 3)/2 ~ 1.55. The best known linear programming (LP)-based algorithm, on the other hand, is due to Goemans and Bertsimas (Math. Programming, 1993) and achieves an approximation ratio of 2-2/|R|. In this paper we establish a link between greedy and LP-based approaches by showing that Robins and Zelikovsky's algorithm has a natural primal-dual interpretation with respect to a novel partition-based linear programming relaxation. We also exhibit surprising connections between the new formulation and existing LPs, and we show that the new LP is stronger than the bidirected cut formulation. An instance is b-quasi-bipartite if each connected component of G - R has at most b vertices. We show that Robins and Zelikovsky's algorithm has an approximation ratio better than 1+(ln 3)/2 for such instances, and we prove that the integrality gap of our LP is between 8/7 and (2b+1)/(b+1).
A Partition-Based Relaxation For Steiner Trees
4,111
The bottleneck network flow problem (BNFP) is a generalization of several well-studied bottleneck problems such as the bottleneck transportation problem (BTP), bottleneck assignment problem (BAP), bottleneck path problem (BPP), and so on. In this paper we provide a review of important results on this topic and its various special cases. We observe that the BNFP can be solved as a sequence of $O(\log n)$ maximum flow problems. However, special augmenting-path-based algorithms for the maximum flow problem can be modified to obtain algorithms for the BNFP with the property that these variants and the corresponding maximum flow algorithms have identical worst-case time complexity. On unit-capacity networks we show that the BNFP can be solved in $O(\min \{m(n\log n)^{2/3}, m^{3/2}\sqrt{\log n}\})$ time. This improves on the best available algorithm by a factor of $\sqrt{\log n}$. On unit-capacity simple graphs, we show that the BNFP can be solved in $O(m \sqrt{n \log n})$ time. As a consequence we have an $O(m \sqrt{n \log n})$ algorithm for the BTP with unit arc capacities.
Bottleneck flows in networks
4,112
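The reduction stated in the abstract above, solving the BNFP via a logarithmic number of maximum-flow computations, can be sketched as follows. We assume one common reading of the bottleneck objective (minimize the largest arc cost carried by a feasible flow of a required value); the paper treats several variants, and this plain Edmonds-Karp subroutine is only illustrative:

    from collections import deque

    def max_flow(n, arcs, s, t):
        """Edmonds-Karp max flow; arcs is a list of (u, v, capacity)."""
        cap, adj = {}, [[] for _ in range(n)]
        for u, v, c in arcs:
            if (u, v) not in cap:
                adj[u].append(v); adj[v].append(u)
            cap[(u, v)] = cap.get((u, v), 0) + c
            cap.setdefault((v, u), 0)
        flow = 0
        while True:
            prev, q = {s: None}, deque([s])
            while q and t not in prev:           # BFS for a shortest
                x = q.popleft()                  # augmenting path
                for y in adj[x]:
                    if y not in prev and cap[(x, y)] > 0:
                        prev[y] = x; q.append(y)
            if t not in prev:
                return flow
            path, y = [], t
            while prev[y] is not None:
                path.append((prev[y], y)); y = prev[y]
            aug = min(cap[e] for e in path)
            for e in path:
                cap[e] -= aug
                cap[(e[1], e[0])] += aug
            flow += aug

    def bottleneck_flow(n, arcs, s, t, demand):
        """Binary-search the bottleneck threshold over distinct arc
        costs, testing each guess with one max-flow computation.
        arcs: list of (u, v, capacity, cost)."""
        costs = sorted({a[3] for a in arcs})
        lo, hi, best = 0, len(costs) - 1, None
        while lo <= hi:
            mid = (lo + hi) // 2
            kept = [(u, v, c) for u, v, c, w in arcs if w <= costs[mid]]
            if max_flow(n, kept, s, t) >= demand:
                best, hi = costs[mid], mid - 1
            else:
                lo = mid + 1
        return best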
Group testing is a long-studied problem in combinatorics: a small set of $r$ ill people should be identified out of a whole population of $n$ people by using only queries (tests) of the form "Does set X contain an ill person?". In this paper we provide an explicit construction of a testing scheme which is better (smaller) than any known explicit construction. This scheme has $\Theta(\min\{r^2 \ln n, n\})$ tests, which is as many as the best non-explicit schemes have. In our construction we use a fact that may be of independent interest: linear error-correcting codes with parameters $[m,k,\delta m]_q$ meeting the Gilbert-Varshamov bound may be constructed quite efficiently, in $\Theta(q^k m)$ time.
Explicit Non-Adaptive Combinatorial Group Testing Schemes
4,113
This paper presents a new technique for deterministic length reduction. This technique improves the running time of the algorithm presented in \cite{LR07} for performing fast convolution on sparse data. While the regular fast convolution of vectors $V_1,V_2$ whose sizes are $N_1,N_2$ respectively takes $O(N_1 \log N_2)$ using FFT, the algorithm proposed in \cite{LR07} uses length reduction to perform the convolution in $O(n_1 \log^3 n_1)$, where $n_1$ is the number of non-zero values in $V_1$. The algorithm assumes that $V_1$ is given in advance and $V_2$ is given at running time. The novel technique presented in this paper improves the convolution time to $O(n_1 \log^2 n_1)$ {\sl deterministically}, which matches the best running time achieved by a {\sl randomized} algorithm. The preprocessing time of the new technique remains the same as that of \cite{LR07}, namely $O(n_1^2)$. This covers the case where $N_1$ is polynomial in $n_1$; in the case where $N_1$ is exponential in $n_1$, a reduction to a polynomial case can be used. In this paper we also improve the preprocessing time of this reduction from $O(n_1^4)$ to $O(n_1^3{\rm polylog}(n_1))$.
Improved Deterministic Length Reduction
4,114
Many applications like pointer analysis and incremental compilation require maintaining a topological ordering of the nodes of a directed acyclic graph (DAG) under dynamic updates. All known algorithms for this problem are either only analyzed for worst-case insertion sequences or only evaluated experimentally on random DAGs. We present the first average-case analysis of online topological ordering algorithms. We prove an expected runtime of O(n^2 polylog(n)) under insertion of the edges of a complete DAG in a random order for the algorithms of Alpern et al. (SODA, 1990), Katriel and Bodlaender (TALG, 2006), and Pearce and Kelly (JEA, 2006). This is much less than the best known worst-case bound O(n^{2.75}) for this problem.
Average-Case Analysis of Online Topological Ordering
4,115
Let $T=t_0 \ldots t_{n-1}$ be a text and $P = p_0 \ldots p_{m-1}$ a pattern taken from some finite alphabet set $\Sigma$, and let $\mathrm{dist}$ be a metric on $\Sigma$. We consider the problem of calculating the sum of distances between the symbols of $P$ and the symbols of substrings of $T$ of length $m$, for all possible offsets. We present an $\epsilon$-approximation algorithm for this problem which runs in time $O(\frac{1}{\epsilon^2}n\cdot \mathrm{polylog}(n, |\Sigma|))$.
Approximating General Metric Distances Between a Pattern and a Text
4,116
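A brute-force baseline makes the problem statement above concrete; the paper's contribution is replacing this O(nm) computation with a near-linear epsilon-approximation:

    def all_offset_distances(T, P, dist):
        """For every offset i, the exact sum of dist(P[j], T[i+j])."""
        n, m = len(T), len(P)
        return [sum(dist(P[j], T[i + j]) for j in range(m))
                for i in range(n - m + 1)]

    # Example with the discrete (Hamming) metric on characters:
    # all_offset_distances("abracadabra", "abr", lambda a, b: int(a != b))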
We study data structures in the presence of adversarial noise. We want to encode a given object in a succinct data structure that enables us to efficiently answer specific queries about the object, even if the data structure has been corrupted by a constant fraction of errors. This new model is the common generalization of (static) data structures and locally decodable error-correcting codes. The main issue is the tradeoff between the space used by the data structure and the time (number of probes) needed to answer a query about the encoded object. We prove a number of upper and lower bounds on various natural error-correcting data structure problems. In particular, we show that the optimal length of error-correcting data structures for the Membership problem (where we want to store subsets of size s from a universe of size n) is closely related to the optimal length of locally decodable codes for s-bit strings.
Error-Correcting Data Structures
4,117
We consider online competitive algorithms for the problem of collecting weighted items from a dynamic set S, when items are added to or deleted from S over time. The objective is to maximize the total weight of collected items. We study the general version, as well as variants with various restrictions, including the following: the uniform case, when all items have the same weight, the decremental sets, when all items are present at the beginning and only deletion operations are allowed, and dynamic queues, where the dynamic set is ordered and only its prefixes can be deleted (with no restriction on insertions). The dynamic queue case is a generalization of bounded-delay packet scheduling (also referred to as buffer management). We present several upper and lower bounds on the competitive ratio for these variants.
Generalized Whac-a-Mole
4,118
Covering problems are fundamental classical problems in optimization, computer science and complexity theory. Typically an input to these problems is a family of sets over a finite universe and the goal is to cover the elements of the universe with as few sets of the family as possible. The variations of covering problems include well known problems like Set Cover, Vertex Cover, Dominating Set and Facility Location to name a few. Recently there has been a lot of study on partial covering problems, a natural generalization of covering problems. Here, the goal is not to cover all the elements but to cover a specified number of elements with the minimum number of sets. In this paper we study partial covering problems in graphs in the realm of parameterized complexity. Classical (non-partial) versions of all these problems have been intensively studied in planar graphs and in graphs excluding a fixed graph $H$ as a minor. However, the techniques developed for parameterized versions of non-partial covering problems cannot be applied directly to their partial counterparts. The approach we use to show that various partial covering problems are fixed-parameter tractable on planar graphs, graphs of bounded local treewidth, and graphs excluding a fixed graph as a minor is quite different from previously known techniques. The main idea behind our approach is the concept of implicit branching. We find the implicit branching technique to be interesting on its own and believe that it can be used for other problems.
Parameterized Algorithms for Partial Cover Problems
4,119
We introduce a parameterized version of set cover that generalizes several previously studied problems. Given a ground set V and a collection of subsets S_i of V, a feasible solution is a partition of V such that each subset of the partition is included in one of the S_i. The problem involves maximizing the mean subset size of the partition, where the mean is the generalized mean of parameter p, taken over the elements. For p=-1, the problem is equivalent to the classical minimum set cover problem. For p=0, it is equivalent to the minimum entropy set cover problem, introduced by Halperin and Karp. For p=1, the problem includes the maximum-edge clique partition problem as a special case. We prove that the greedy algorithm simultaneously approximates the problem within a factor of (p+1)^{1/p} for any p in R^+, and that this is the best possible unless P=NP. These results both generalize and simplify previous results for special cases. We also consider the corresponding graph coloring problem, and prove several tractability and inapproximability results. Finally, we consider a further generalization of the set cover problem in which we aim at minimizing the sum of some concave function of the part sizes. As an application, we derive an approximation ratio for a Rent-or-Buy set cover problem.
Set Covering Problems with General Objective Functions
4,120
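The greedy algorithm analysed in the abstract above is the familiar one; a short Python sketch follows, where the returned partition would then be scored by the generalized p-mean of part sizes taken over elements:

    def greedy_partition(V, S):
        """Repeatedly pick the subset covering the most still-uncovered
        elements; those elements form one part of the partition."""
        uncovered, parts = set(V), []
        while uncovered:
            best = max(S, key=lambda s: len(uncovered & set(s)))
            part = uncovered & set(best)
            if not part:
                raise ValueError("some element lies in no subset")
            parts.append(part)
            uncovered -= part
        return parts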
In the k-2VC problem, we are given an undirected graph G with edge costs and an integer k; the goal is to find a minimum-cost 2-vertex-connected subgraph of G containing at least k vertices. A slightly more general version is obtained if the input also specifies a subset S \subseteq V of terminals and the goal is to find a subgraph containing at least k terminals. Closely related to the k-2VC problem, and in fact a special case of it, is the k-2EC problem, in which the goal is to find a minimum-cost 2-edge-connected subgraph containing k vertices. The k-2EC problem was introduced by Lau et al., who also gave a poly-logarithmic approximation for it. No previous approximation algorithm was known for the more general k-2VC problem. We describe an O(\log n \log k) approximation for the k-2VC problem.
Min-Cost 2-Connected Subgraphs With k Terminals
4,121
The measure and conquer approach has proven to be a powerful tool to analyse exact algorithms for combinatorial problems, like Dominating Set and Independent Set. In this paper, we propose to use measure and conquer also as a tool in the design of algorithms. In an iterative process, we can obtain a series of branch and reduce algorithms. A mathematical analysis of an algorithm in the series with measure and conquer results in a quasiconvex programming problem. The solution by computer to this problem not only gives a bound on the running time, but also can give a new reduction rule, thus giving a new, possibly faster algorithm. This makes design by measure and conquer a form of computer aided algorithm design. When we apply the methodology to a Set Cover modelling of the Dominating Set problem, we obtain the currently fastest known exact algorithms for Dominating Set: an algorithm that uses $O(1.5134^n)$ time and polynomial space, and an algorithm that uses $O(1.5063^n)$ time.
Design by Measure and Conquer, A Faster Exact Algorithm for Dominating Set
4,122
In the Multislope Ski Rental problem, the user needs a certain resource for some unknown period of time. To use the resource, the user must subscribe to one of several options, each of which consists of a one-time setup cost (``buying price''), and cost proportional to the duration of the usage (``rental rate''). The larger the price, the smaller the rent. The actual usage time is determined by an adversary, and the goal of an algorithm is to minimize the cost by choosing the best option at any point in time. Multislope Ski Rental is a natural generalization of the classical Ski Rental problem (where the only options are pure rent and pure buy), which is one of the fundamental problems of online computation. The Multislope Ski Rental problem is an abstraction of many problems where online decisions cannot be modeled by just two options, e.g., power management in systems which can be shut down in parts. In this paper we study randomized algorithms for Multislope Ski Rental. Our results include the best possible online randomized strategy for any additive instance, where the cost of switching from one option to another is the difference in their buying prices; and an algorithm that produces an $e$-competitive randomized strategy for any (non-additive) instance.
Rent, Lease or Buy: Randomized Algorithms for Multislope Ski Rental
4,123
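For additive instances as defined in the abstract above, the offline optimum for a known usage duration t has a simple closed form: paid setup costs telescope to the final option's buying price, so a single option is always best. A small sketch (names ours):

    def offline_cost(options, t):
        """Offline optimal cost for known duration t in the additive
        multislope model; options is a list of (buy_price, rental_rate)."""
        return min(b + r * t for b, r in options)

An online strategy is then e-competitive if its cost is at most e times this lower envelope for every duration the adversary may choose.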
We provide the first non-trivial result on dynamic breadth-first search (BFS) in external-memory: For general sparse undirected graphs of initially $n$ nodes and O(n) edges and monotone update sequences of either $\Theta(n)$ edge insertions or $\Theta(n)$ edge deletions, we prove an amortized high-probability bound of $O(n/B^{2/3}+\mathrm{sort}(n)\cdot \log B)$ I/Os per update. In contrast, the currently best approach for static BFS on sparse undirected graphs requires $\Omega(n/B^{1/2}+\mathrm{sort}(n))$ I/Os.
On Dynamic Breadth-First Search in External-Memory
4,124
We study the scheduling problem on unrelated machines in the mechanism design setting. This problem was proposed and studied in the seminal paper of Nisan and Ronen (1999), where they gave a 1.75-approximation randomized truthful mechanism for the case of two machines. We improve this result with a 1.6737-approximation randomized truthful mechanism. We also generalize our result to a $0.8368m$-approximation mechanism for task scheduling with $m$ machines, which improves the previous best upper bound of $0.875m$ (Mu'alem and Schapira 2007).
An Improved Randomized Truthful Mechanism for Scheduling Unrelated Machines
4,125
We analyze a simple random process in which a token is moved in the interval $A=\{0,\ldots,n\}$: Fix a probability distribution $\mu$ over $\{1,\ldots,n\}$. Initially, the token is placed in a random position in $A$. In round $t$, a random value $d$ is chosen according to $\mu$. If the token is in position $a\geq d$, then it is moved to position $a-d$. Otherwise it stays put. Let $T$ be the number of rounds until the token reaches position 0. We show tight bounds for the expectation of $T$ for the optimal distribution $\mu$. More precisely, we show that $\min_\mu\{E_\mu(T)\}=\Theta((\log n)^2)$. For the proof, a novel potential function argument is introduced. The research is motivated by the problem of approximating the minimum of a continuous function over $[0,1]$ with a ``blind'' optimization strategy.
Tight Bounds for Blind Search on the Integers
4,126
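The process above is easy to simulate, which also makes the Theta((log n)^2) behavior observable empirically. A Python sketch follows; the power-of-two distribution at the end is only an illustrative choice of mu, not the paper's optimal one:

    import random

    def blind_search_rounds(n, mu, rng=random):
        """Run the token process once; mu is a list of (step d, prob)
        pairs summing to 1. Returns T, the rounds until position 0."""
        steps, probs = zip(*mu)
        a, t = rng.randint(0, n), 0      # token starts uniformly in A
        while a > 0:
            d = rng.choices(steps, probs)[0]
            if a >= d:
                a -= d                   # move; otherwise stay put
            t += 1
        return t

    n = 10**6
    powers = [2**i for i in range(n.bit_length())]
    mu = [(d, 1.0 / len(powers)) for d in powers]
    estimate = sum(blind_search_rounds(n, mu) for _ in range(100)) / 100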
We consider the minimum spanning tree problem in a setting where information about the edge weights of the given graph is uncertain. Initially, for each edge $e$ of the graph only a set $A_e$, called an uncertainty area, that contains the actual edge weight $w_e$ is known. The algorithm can `update' $e$ to obtain the edge weight $w_e \in A_e$. The task is to output the edge set of a minimum spanning tree after a minimum number of updates. An algorithm is $k$-update competitive if it makes at most $k$ times as many updates as the optimum. We present a 2-update competitive algorithm if all areas $A_e$ are open or trivial, which is the best possible among deterministic algorithms. The condition on the areas $A_e$ is to exclude degenerate inputs for which no constant update competitive algorithm can exist. Next, we consider a setting where the vertices of the graph correspond to points in Euclidean space and the weight of an edge is equal to the distance of its endpoints. The location of each point is initially given as an uncertainty area, and an update reveals the exact location of the point. We give a general relation between the edge uncertainty and the vertex uncertainty versions of a problem and use it to derive a 4-update competitive algorithm for the minimum spanning tree problem in the vertex uncertainty model. Again, we show that this is best possible among deterministic algorithms.
Computing Minimum Spanning Trees with Uncertainty
4,127
We consider the problem of constructing bounded-degree planar geometric spanners of Euclidean and unit-disk graphs. It is well known that the Delaunay subgraph is a planar geometric spanner with stretch factor $C_{del}\approx 2.42$; however, its degree may not be bounded. Our first result is a very simple linear time algorithm for constructing a subgraph of the Delaunay graph with stretch factor $\rho = 1+2\pi(k\cos{\frac{\pi}{k}})^{-1}$ and degree bounded by $k$, for any integer parameter $k\geq 14$. This result immediately implies an algorithm for constructing a planar geometric spanner of a Euclidean graph with stretch factor $\rho \cdot C_{del}$ and degree bounded by $k$, for any integer parameter $k\geq 14$. Moreover, the resulting spanner contains a Euclidean Minimum Spanning Tree (EMST) as a subgraph. Our second contribution lies in developing the structural results necessary to transfer our analysis and algorithm from Euclidean graphs to unit disk graphs, the usual model for wireless ad-hoc networks. We obtain a very simple distributed, {\em strictly-localized} algorithm that, given a unit disk graph embedded in the plane, constructs a geometric spanner with the above stretch factor and degree bound, and also containing an EMST as a subgraph. The obtained results dramatically improve the previous results in all aspects, as shown in the paper.
On Geometric Spanners of Euclidean and Unit Disk Graphs
4,128
Consider a set of labels $L$ and a set of trees ${\mathcal T} = \{{\mathcal T}^{(1)}, {\mathcal T}^{(2)}, \ldots, {\mathcal T}^{(k)}\}$ where each tree ${\mathcal T}^{(i)}$ is distinctly leaf-labeled by some subset of $L$. One fundamental problem is to find the biggest tree (denoted as supertree) to represent ${\mathcal T}$ which minimizes the disagreements with the trees in ${\mathcal T}$ under certain criteria. This problem finds applications in phylogenetics, database, and data mining. In this paper, we focus on two particular supertree problems, namely, the maximum agreement supertree problem (MASP) and the maximum compatible supertree problem (MCSP). These two problems are known to be NP-hard for $k \geq 3$. This paper gives the first polynomial time algorithms for both MASP and MCSP when both $k$ and the maximum degree $D$ of the trees are constant.
Fixed Parameter Polynomial Time Algorithms for Maximum Agreement and Compatible Supertrees
4,129
We propose a dynamical process for network evolution, aiming at explaining the emergence of the small world phenomenon, i.e., the statistical observation that any pair of individuals are linked by a short chain of acquaintances computable by a simple decentralized routing algorithm, known as greedy routing. Previously proposed dynamical processes made it possible to demonstrate experimentally (by simulations) that the small world phenomenon can emerge from local dynamics. However, the analysis of greedy routing using the probability distributions arising from these dynamics is quite complex because of mutual dependencies. In contrast, our process enables complete formal analysis. It is based on the combination of two simple processes: a random walk process, and a harmonic forgetting process. Both processes reflect natural behaviors of the individuals, viewed as nodes in the network of inter-individual acquaintances. We prove that, in k-dimensional lattices, the combination of these two processes generates long-range links mutually independently distributed as a k-harmonic distribution. We analyze the performance of greedy routing at the stationary regime of our process, and prove that the expected number of steps for routing from any source to any target in any multidimensional lattice is a polylogarithmic function of the distance between the two nodes in the lattice. To our knowledge, these results are the first formal proof that navigability in small worlds can emerge from a dynamical process for network evolution. Our dynamical process can find practical applications to the design of spatial gossip and resource location protocols.
Networks become navigable as nodes move and forget
4,130
From a high volume stream of weighted items, we want to maintain a generic sample of a certain limited size $k$ that we can later use to estimate the total weight of arbitrary subsets. This is the classic context of on-line reservoir sampling, thinking of the generic sample as a reservoir. We present an efficient reservoir sampling scheme, $\mathrm{VarOpt}_k$, that dominates all previous schemes in terms of estimation quality. $\mathrm{VarOpt}_k$ provides {\em variance optimal unbiased estimation of subset sums}. More precisely, if we have seen $n$ items of the stream, then for {\em any} subset size $m$, our scheme based on $k$ samples minimizes the average variance over all subsets of size $m$. In fact, the optimality is against any off-line scheme with $k$ samples tailored for the concrete set of items seen. In addition to optimal average variance, our scheme provides tighter worst-case bounds on the variance of {\em particular} subsets than previously possible. It is efficient, handling each new item of the stream in $O(\log k)$ time. Finally, it is particularly well suited for combination of samples from different streams in a distributed setting.
Stream sampling for variance-optimal estimation of subset sums
4,131
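For contrast with the $\mathrm{VarOpt}_k$ scheme above, the classic unweighted reservoir sampler (Vitter's Algorithm R) is sketched below; it keeps a uniform sample of k stream items but offers none of the abstract's variance-optimality guarantees for weighted subset-sum estimation:

    import random

    def reservoir_sample(stream, k, rng=random):
        """Uniform sample of k items from a stream of unknown length."""
        sample = []
        for i, item in enumerate(stream):
            if i < k:
                sample.append(item)
            else:
                j = rng.randrange(i + 1)   # keep item with prob k/(i+1)
                if j < k:
                    sample[j] = item
        return sample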
We present an on-line algorithm for maintaining a topological order of a directed acyclic graph as arcs are added, and detecting a cycle when one is created. Our algorithm takes O(m^{1/2}) amortized time per arc, where m is the total number of arcs. For sparse graphs, this bound improves the best previous bound by a logarithmic factor and is tight to within a constant factor for a natural class of algorithms that includes all the existing ones. Our main insight is that the bidirectional search method of previous algorithms does not require an ordered search, but can be more general. This allows us to avoid the use of heaps (priority queues) entirely. Instead, the deterministic version of our algorithm uses (approximate) median-finding. The randomized version of our algorithm avoids this complication, making it very simple. We extend our topological ordering algorithm to give the first detailed algorithm for maintaining the strong components of a directed graph, and a topological order of these components, as arcs are added. This extension also has an amortized time bound of O(m^{1/2}) per arc.
Incremental Topological Ordering and Strong Component Maintenance
4,132
We present a nearly-linear time algorithm that produces high-quality sparsifiers of weighted graphs. Given as input a weighted graph $G=(V,E,w)$ and a parameter $\epsilon>0$, we produce a weighted subgraph $H=(V,\tilde{E},\tilde{w})$ of $G$ such that $|\tilde{E}|=O(n\log n/\epsilon^2)$ and for all vectors $x\in\R^V$ $(1-\epsilon)\sum_{uv\in E}(x(u)-x(v))^2w_{uv}\le \sum_{uv\in\tilde{E}}(x(u)-x(v))^2\tilde{w}_{uv} \le (1+\epsilon)\sum_{uv\in E}(x(u)-x(v))^2w_{uv}. (*)$ This improves upon the sparsifiers constructed by Spielman and Teng, which had $O(n\log^c n)$ edges for some large constant $c$, and upon those of Bencz\'ur and Karger, which only satisfied (*) for $x\in\{0,1\}^V$. A key ingredient in our algorithm is a subroutine of independent interest: a nearly-linear time algorithm that builds a data structure from which we can query the approximate effective resistance between any two vertices in a graph in $O(\log n)$ time.
Graph Sparsification by Effective Resistances
4,133
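The sampling scheme above can be mimicked in a dense O(n^3) sketch that computes effective resistances via the Laplacian pseudoinverse; the paper's point is achieving this in nearly linear time, and the sample-count constant below is an arbitrary illustrative choice:

    import numpy as np

    def spectral_sparsify(n, edges, eps, rng=np.random.default_rng(0)):
        """Sample q edges with probability proportional to w_e * R_e and
        reweight by w_e / (q p_e); edges is a list of (u, v, w)."""
        L = np.zeros((n, n))
        for u, v, w in edges:
            L[u, u] += w; L[v, v] += w
            L[u, v] -= w; L[v, u] -= w
        Lp = np.linalg.pinv(L)
        # Effective resistance R_uv = (e_u - e_v)^T L^+ (e_u - e_v).
        R = np.array([Lp[u, u] + Lp[v, v] - 2 * Lp[u, v]
                      for u, v, _ in edges])
        w = np.array([e[2] for e in edges], dtype=float)
        p = w * R / np.sum(w * R)
        q = int(8 * n * np.log(n) / eps**2)     # O(n log n / eps^2)
        counts = rng.multinomial(q, p)
        return [(edges[i][0], edges[i][1], counts[i] * w[i] / (q * p[i]))
                for i in np.flatnonzero(counts)]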
For a given graph G and integers b,f >= 0, let S be a subset of vertices of G of size b+1 such that the subgraph of G induced by S is connected and S can be separated from other vertices of G by removing f vertices. We prove that every graph on n vertices contains at most n\binom{b+f}{b} such vertex subsets. This result from extremal combinatorics appears to be very useful in the design of several enumeration and exact algorithms. In particular, we use it to provide algorithms that for a given n-vertex graph G - compute the treewidth of G in time O(1.7549^n) by making use of exponential space and in time O(2.6151^n) and polynomial space; - decide in time $O((\frac{2n+k+1}{3})^{k+1}\cdot kn^6)$ if the treewidth of G is at most k; - list all minimal separators of G in time O(1.6181^n) and all potential maximal cliques of G in time O(1.7549^n). This significantly improves previous algorithms for these problems.
Treewidth computation and extremal combinatorics
4,134
This article provides an overview of the performance and the theoretical complexity of approximate and exact methods for various versions of the shortest path problem. The proposed study aims to improve the resolution of a more general covering problem within a column generation scheme in which the shortest path problem is the sub-problem.
Research Report on the Constrained Shortest Path Problem
4,135
We study the admission control problem in general networks. Communication requests arrive over time, and the online algorithm accepts or rejects each request while maintaining the capacity limitations of the network. The admission control problem has been usually analyzed as a benefit problem, where the goal is to devise an online algorithm that accepts the maximum number of requests possible. The problem with this objective function is that even algorithms with optimal competitive ratios may reject almost all of the requests, when it would have been possible to reject only a few. This could be inappropriate for settings in which rejections are intended to be rare events. In this paper, we consider preemptive online algorithms whose goal is to minimize the number of rejected requests. Each request arrives together with the path it should be routed on. We show an $O(\log^2 (mc))$-competitive randomized algorithm for the weighted case, where $m$ is the number of edges in the graph and $c$ is the maximum edge capacity. For the unweighted case, we give an $O(\log m \log c)$-competitive randomized algorithm. This settles an open question of Blum, Kalai and Kleinberg raised in \cite{BlKaKl01}. We note that allowing preemption and handling requests with given paths are essential for avoiding trivial lower bounds.
Admission Control to Minimize Rejections and Online Set Cover with Repetitions
4,136
Numerous studies show that most known real-world complex networks share similar properties in their connectivity and degree distributions. They are called small worlds. This article gives a method to turn random graphs into small-world graphs by dint of random walks.
From Random Graph to Small World by Wandering
4,137
There exists an injective, information-preserving function that maps a semantic network (i.e., a directed labeled network) to a directed network (i.e., a directed unlabeled network). The edge label in the semantic network is represented as a topological feature of the directed network. Also, there exists an injective function that maps a directed network to an undirected network (i.e., an undirected unlabeled network). The edge directionality in the directed network is represented as a topological feature of the undirected network. Through function composition, there exists an injective function that maps a semantic network to an undirected network. Thus, aside from space constraints, the semantic network construct does not have any modeling functionality that is not possible with either a directed or undirected network representation. Two proofs of this idea will be presented. The first is a proof of the aforementioned function composition concept. The second is a simpler proof involving an undirected binary encoding of a semantic network.
Mapping Semantic Networks to Undirected Networks
4,138
The generalized traveling salesman problem (GTSP) is an extension of the well-known traveling salesman problem. In GTSP, we are given a partition of cities into groups and we are required to find a minimum length tour that includes exactly one city from each group. The recent studies on this subject consider different variations of a memetic algorithm approach to the GTSP. The aim of this paper is to present a new memetic algorithm for GTSP with a powerful local search procedure. The experiments show that the proposed algorithm clearly outperforms all of the known heuristics with respect to both solution quality and running time. While the other memetic algorithms were designed only for the symmetric GTSP, our algorithm can solve both symmetric and asymmetric instances.
A Memetic Algorithm for the Generalized Traveling Salesman Problem
4,139
The generalized traveling salesman problem (GTSP) is an extension of the well-known traveling salesman problem. In GTSP, we are given a partition of cities into groups and we are required to find a minimum length tour that includes exactly one city from each group. The aim of this paper is to present a problem reduction algorithm that deletes redundant vertices and edges, preserving the optimal solution. The algorithm's running time is O(N^3) in the worst case, but it is significantly faster in practice. The algorithm has reduced the problem size by 15-20% on average in our experiments and this has decreased the solution time by 10-60% for each of the considered solvers.
Generalized Traveling Salesman Problem Reduction Algorithms
4,140
Let X[0..n-1] and Y[0..m-1] be two sorted arrays, and define the m x n matrix A by A[j][i]=X[i]+Y[j]. Frederickson and Johnson gave an efficient algorithm for selecting the k-th smallest element from A. We show how to make this algorithm IO-efficient. Our cache-oblivious algorithm performs O((m+n)/B) IOs, where B is the block size of memory transfers.
Cache-Oblivious Selection in Sorted X+Y Matrices
4,141
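An in-memory baseline clarifies the selection problem above: a frontier heap finds the k-th smallest entry of A in O(k log k) time without materializing the matrix. This is not the Frederickson-Johnson algorithm or the paper's cache-oblivious one, just the problem in code:

    import heapq

    def kth_smallest_x_plus_y(X, Y, k):
        """k-th smallest of A[j][i] = X[i] + Y[j] for sorted X and Y."""
        heap, seen = [(X[0] + Y[0], 0, 0)], {(0, 0)}
        for _ in range(k - 1):
            _, i, j = heapq.heappop(heap)
            for ni, nj in ((i + 1, j), (i, j + 1)):
                if ni < len(X) and nj < len(Y) and (ni, nj) not in seen:
                    seen.add((ni, nj))
                    heapq.heappush(heap, (X[ni] + Y[nj], ni, nj))
        return heap[0][0]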
The Hierarchical Memory Model (HMM) of computation is similar to the standard Random Access Machine (RAM) model except that the HMM has a non-uniform memory organized in a hierarchy of levels numbered 1 through h. The cost of accessing a memory location increases with the level number, and accesses to memory locations belonging to the same level cost the same. Formally, the cost of a single access to the memory location at address a is given by m(a), where m: N -> N is the memory cost function, and the h distinct values of m model the different levels of the memory hierarchy. We study the problem of constructing and storing a binary search tree (BST) of minimum cost, over a set of keys, with probabilities for successful and unsuccessful searches, on the HMM with an arbitrary number of memory levels, and for the special case h=2. While the problem of constructing optimum binary search trees has been well studied for the standard RAM model, the additional parameter m for the HMM increases the combinatorial complexity of the problem. We present two dynamic programming algorithms to construct optimum BSTs bottom-up. These algorithms run efficiently under some natural assumptions about the memory hierarchy. We also give an efficient algorithm to construct a BST that is close to optimum, by modifying a well-known linear-time approximation algorithm for the RAM model. We conjecture that the problem of constructing an optimum BST for the HMM with an arbitrary memory cost function m is NP-complete.
Optimum Binary Search Trees on the Hierarchical Memory Model
4,142
We consider the problem of computing L1-distances between every pair of probability densities from a given family. We point out that the technique of Cauchy random projections (Indyk'06) in this context turns into stochastic integrals with respect to Cauchy motion. For piecewise-linear densities these integrals can be sampled from if one can sample from the stochastic integral of the function x->(1,x). We give an explicit density function for this stochastic integral and present an efficient sampling algorithm. As a consequence we obtain an efficient algorithm to approximate the L1-distances with a small relative error. For piecewise-polynomial densities we show how to approximately sample from the distributions resulting from the stochastic integrals. This also results in an efficient algorithm to approximate the L1-distances, although our inability to get exact samples worsens the dependence on the parameters.
Approximating L1-distances between mixture distributions using random projections
4,143
The Min Energy broadcast problem consists in assigning transmission ranges to the nodes of an ad-hoc network in order to guarantee a directed spanning tree from a given source node and, at the same time, to minimize the energy consumption (i.e. the energy cost) yielded by the range assignment. Min energy broadcast is known to be NP-hard. We consider random-grid networks where nodes are chosen independently at random from the $n$ points of a $\sqrt n \times \sqrt n$ square grid in the plane. The probability of the existence of a node at a given point of the grid may depend on that point; that is, the probability distribution can be non-uniform. By using information-theoretic arguments, we prove a lower bound $(1-\epsilon) \frac n{\pi}$ on the energy cost of any feasible solution for this problem. Then, we provide an efficient solution of energy cost not larger than $1.1204 \frac n{\pi}$. Finally, we present a fully-distributed protocol that constructs a broadcast range assignment of energy cost not larger than $8n$, thus still yielding a constant approximation. The energy load is well balanced and, at the same time, the work complexity (i.e. the energy due to all message transmissions of the protocol) is asymptotically optimal. The completion time of the protocol is only an $O(\log n)$ factor slower than the optimum. The approximation quality of our distributed solution is also experimentally evaluated. All bounds hold with probability at least $1-1/n^{\Theta(1)}$.
Minimum-energy broadcast in random-grid ad-hoc networks: approximation and distributed algorithms
4,144
Contraction hierarchies are a simple hierarchical routing technique that has proved extremely efficient for static road networks. We explain how to generalize them to networks with time-dependent edge weights. This is the first hierarchical speedup technique for time-dependent routing that allows bidirectional query algorithms.
Time Dependent Contraction Hierarchies -- Basic Algorithmic Ideas
4,145
We conclude a sequence of work by giving near-optimal sketching and streaming algorithms for estimating Shannon entropy in the most general streaming model, with arbitrary insertions and deletions. This improves on prior results that obtain suboptimal space bounds in the general model, and near-optimal bounds in the insertion-only model without sketching. Our high-level approach is simple: we give algorithms to estimate Renyi and Tsallis entropy, and use them to extrapolate an estimate of Shannon entropy. The accuracy of our estimates is proven using approximation theory arguments and extremal properties of Chebyshev polynomials, a technique which may be useful for other problems. Our work also yields the best-known and near-optimal additive approximations for entropy, and hence also for conditional entropy and mutual information.
Sketching and Streaming Entropy via Approximation Theory
4,146
We study the minimum backlog problem (MBP). This online problem arises, e.g., in the context of sensor networks. We focus on two main variants of MBP. The discrete MBP is a 2-person game played on a graph $G=(V,E)$. The player is initially located at a vertex of the graph. In each time step, the adversary pours a total of one unit of water into cups that are located on the vertices of the graph, arbitrarily distributing the water among the cups. The player then moves from her current vertex to an adjacent vertex and empties the cup at that vertex. The player's objective is to minimize the backlog, i.e., the maximum amount of water in any cup at any time. The geometric MBP is a continuous-time version of the MBP: the cups are points in the two-dimensional plane, the adversary pours water continuously at a constant rate, and the player moves in the plane with unit speed. Again, the player's objective is to minimize the backlog. We show that the competitive ratio of any algorithm for the MBP has a lower bound of $\Omega(D)$, where $D$ is the diameter of the graph (for the discrete MBP) or the diameter of the point set (for the geometric MBP). Therefore we focus on determining a strategy for the player that guarantees a uniform upper bound on the absolute value of the backlog. For the absolute value of the backlog there is a trivial lower bound of $\Omega(D)$, and the deamortization analysis of Dietz and Sleator gives an upper bound of $O(D\log N)$ for $N$ cups. Our main result is a tight upper bound for the geometric MBP: we show that there is a strategy for the player that guarantees a backlog of $O(D)$, independently of the number of cups.
The Minimum Backlog Problem
4,147
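The discrete game above is easy to experiment with. The sketch below pits a pouring schedule against a naive greedy player that walks toward the currently fullest cup; this greedy is only an illustration, not the paper's O(D)-backlog strategy:

    from collections import deque

    def next_hop(adj, src, dst):
        """First step on a shortest path from src to dst (BFS);
        returns src itself if src == dst."""
        if src == dst:
            return src
        prev, q = {src: None}, deque([src])
        while q:
            x = q.popleft()
            for y in adj[x]:
                if y not in prev:
                    prev[y] = x
                    q.append(y)
        step = dst
        while prev[step] != src:
            step = prev[step]
        return step

    def greedy_backlog(adj, start, pours):
        """Simulate the game; pours is a list of dicts vertex -> water
        (each summing to 1). Returns the backlog the player suffers."""
        cups, pos, backlog = [0.0] * len(adj), start, 0.0
        for pour in pours:
            for v, amount in pour.items():
                cups[v] += amount
            backlog = max(backlog, max(cups))
            target = max(range(len(adj)), key=cups.__getitem__)
            pos = next_hop(adj, pos, target)
            cups[pos] = 0.0               # empty the cup we arrive at
        return backlog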
We introduce several generalizations of classical computer science problems obtained by replacing simpler objective functions with general submodular functions. The new problems include submodular load balancing, which generalizes load balancing or minimum-makespan scheduling, submodular sparsest cut and submodular balanced cut, which generalize their respective graph cut problems, as well as submodular function minimization with a cardinality lower bound. We establish upper and lower bounds for the approximability of these problems with a polynomial number of queries to a function-value oracle. The approximation guarantees for most of our algorithms are of the order of sqrt(n/ln n). We show that this is the inherent difficulty of the problems by proving matching lower bounds. We also give an improved lower bound for the problem of approximately learning a monotone submodular function. In addition, we present an algorithm for approximately learning submodular functions with special structure, whose guarantee is close to the lower bound. Although quite restrictive, the class of functions with this structure includes the ones that are used for lower bounds both by us and in previous work. This demonstrates that if there are significantly stronger lower bounds for this problem, they rely on more general submodular functions.
Submodular approximation: sampling-based algorithms and lower bounds
4,148
The Noah's Ark Problem (NAP) is an NP-Hard optimization problem with relevance to ecological conservation management. It asks to maximize the phylogenetic diversity (PD) of a set of taxa given a fixed budget, where each taxon is associated with a cost of conservation and a probability of extinction. NAP has received renewed interest with the rise in availability of genetic sequence data, allowing PD to be used as a practical measure of biodiversity. However, only simplified instances of the problem, where one or more parameters are fixed as constants, have as of yet been addressed in the literature. We present NAPX, the first algorithm for the general version of NAP that returns a $1 - \epsilon$ approximation of the optimal solution. It runs in $O(\frac{n B^2 h^2 \log^2 n}{\log^2(1 - \epsilon)})$ time, where $n$ is the number of species, $B$ is the total budget, and $h$ is the height of the input tree. We also provide improved bounds for its expected running time.
NAPX: A Polynomial Time Approximation Scheme for the Noah's Ark Problem
4,149
The celebrated multi-armed bandit problem in decision theory models the basic trade-off between exploration, or learning about the state of a system, and exploitation, or utilizing the system. In this paper we study the variant of the multi-armed bandit problem where the exploration phase involves costly experiments and occurs before the exploitation phase; and where each play of an arm during the exploration phase updates a prior belief about the arm. The problem of finding an inexpensive exploration strategy to optimize a certain exploitation objective is NP-Hard even when a single play reveals all information about an arm, and all exploration steps cost the same. We provide the first polynomial time constant-factor approximation algorithm for this class of problems. We show that this framework also generalizes several problems of interest studied in the context of data acquisition in sensor networks. Our analysis also extends to switching and setup costs, and to concave utility objectives. Our solution approach is via a novel linear program rounding technique based on stochastic packing. In addition to yielding exploration policies whose performance is within a small constant factor of the adaptive optimal policy, a nice feature of this approach is that the resulting policies explore the arms sequentially without revisiting any arm. Sequentiality is a well-studied concept in decision theory, and is very desirable in domains where multiple explorations can be conducted in parallel, for instance, in the sensor network context.
Sequential Design of Experiments via Linear Programming
4,150
We investigate the problem of computing a minimum set of solutions that approximates within a specified accuracy $\epsilon$ the Pareto curve of a multiobjective optimization problem. We show that for a broad class of bi-objective problems (containing many important widely studied problems such as shortest paths, spanning tree, and many others), we can compute in polynomial time an $\epsilon$-Pareto set that contains at most twice as many solutions as the minimum such set. Furthermore we show that the factor of 2 is tight for these problems, i.e., it is NP-hard to do better. We present upper and lower bounds for three or more objectives, as well as for the dual problem of computing a specified number $k$ of solutions which provide a good approximation to the Pareto curve.
Small Approximate Pareto Sets for Bi-objective Shortest Paths and Other Problems
4,151
This paper provides a systematic study of several proposed measures for online algorithms in the context of a specific problem, namely, the two server problem on three collinear points. Even though the problem is simple, it encapsulates a core challenge in online algorithms which is to balance greediness and adaptability. We examine Competitive Analysis, the Max/Max Ratio, the Random Order Ratio, Bijective Analysis and Relative Worst Order Analysis, and determine how these measures compare the Greedy Algorithm, Double Coverage, and Lazy Double Coverage, commonly studied algorithms in the context of server problems. We find that by the Max/Max Ratio and Bijective Analysis, Greedy is the best of the three algorithms. Under the other measures, Double Coverage and Lazy Double Coverage are better, though Relative Worst Order Analysis indicates that Greedy is sometimes better. Only Bijective Analysis and Relative Worst Order Analysis indicate that Lazy Double Coverage is better than Double Coverage. Our results also provide the first proof of optimality of an algorithm under Relative Worst Order Analysis.
A Comparison of Performance Measures for Online Algorithms
4,152
In this paper, some issues concerning the Chinese remaindering representation are discussed. Some new converting methods, including an efficient probabilistic algorithm based on a recent result of von zur Gathen and Shparlinski \cite{Gathen-Shparlinski}, are described. An efficient refinement of the NC$^1$ division algorithm of Chiu, Davida and Litow \cite{Chiu-Davida-Litow} is given, where the number of moduli is reduced by a factor of $\log n$.
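To make the representation concrete, here is a minimal Python sketch of standard Chinese remainder reconstruction. It is for illustration only; neither the probabilistic converting method nor the NC$^1$ division refinement from the abstract is reproduced, and the function name and test values are our own.

    # Recover an integer from its residues modulo pairwise-coprime moduli.
    from math import prod

    def crt(residues, moduli):
        M = prod(moduli)
        x = 0
        for r, m in zip(residues, moduli):
            Mi = M // m
            # pow(Mi, -1, m) is the inverse of Mi modulo m (Python 3.8+)
            x = (x + r * Mi * pow(Mi, -1, m)) % M
        return x

    assert crt([2, 3, 2], [3, 5, 7]) == 23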
Fast Arithmetics Using Chinese Remaindering
4,153
It is known that if a 2-universal hash function $H$ is applied to elements of a {\em block source} $(X_1,...,X_T)$, where each item $X_i$ has enough min-entropy conditioned on the previous items, then the output distribution $(H,H(X_1),...,H(X_T))$ will be ``close'' to the uniform distribution. We provide improved bounds on how much min-entropy per item is required for this to hold, both when we ask that the output be close to uniform in statistical distance and when we only ask that it be statistically close to a distribution with small collision probability. In both cases, we reduce the dependence of the min-entropy on the number $T$ of items from $2\log T$ in previous work to $\log T$, which we show to be optimal. This leads to corresponding improvements to the recent results of Mitzenmacher and Vadhan (SODA `08) on the analysis of hashing-based algorithms and data structures when the data items come from a block source.
Tight Bounds for Hashing Block Sources
4,154
We describe a new approximation algorithm for Max Cut. Our algorithm runs in $\tilde O(n^2)$ time, where $n$ is the number of vertices, and achieves an approximation ratio of $.531$. On instances in which an optimal solution cuts a $1-\epsilon$ fraction of edges, our algorithm finds a solution that cuts a $1-4\sqrt{\epsilon} + 8\epsilon-o(1)$ fraction of edges. Our main result is a variant of spectral partitioning, which can be implemented in nearly linear time. Given a graph in which the Max Cut optimum is a $1-\epsilon$ fraction of edges, our spectral partitioning algorithm finds a set $S$ of vertices and a bipartition $L,R=S-L$ of $S$ such that at least a $1-O(\sqrt \epsilon)$ fraction of the edges incident on $S$ have one endpoint in $L$ and one endpoint in $R$. (This can be seen as an analog of Cheeger's inequality for the smallest eigenvalue of the adjacency matrix of a graph.) Iterating this procedure yields the approximation results stated above. A different, more complicated, variant of spectral partitioning leads to an $\tilde O(n^3)$ time algorithm that cuts a $1/2 + e^{-\Omega(1/\epsilon)}$ fraction of edges in graphs in which the optimum is $1/2 + \epsilon$.
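As a minimal illustration of the spectral idea (a dense eigensolver sketch in Python, not the paper's nearly-linear-time procedure), one can bipartition by the signs of the eigenvector for the smallest adjacency eigenvalue:

    import numpy as np

    def spectral_cut(adj):
        # eigenvector for the smallest eigenvalue of the adjacency matrix
        vals, vecs = np.linalg.eigh(adj)
        side = vecs[:, 0] >= 0            # bipartition by sign
        n = len(adj)
        cut = sum(adj[i, j] for i in range(n) for j in range(i + 1, n)
                  if side[i] != side[j])
        return side, cut

    # A 4-cycle is bipartite, so all 4 of its edges should be cut.
    C4 = np.array([[0,1,0,1],[1,0,1,0],[0,1,0,1],[1,0,1,0]], dtype=float)
    print(spectral_cut(C4)[1])  # 4.0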
Max Cut and the Smallest Eigenvalue
4,155
The Multidimensional Assignment Problem (MAP) (abbreviated s-AP in the case of s dimensions) is an extension of the well-known assignment problem. The most studied case of MAP is 3-AP, though the problems with larger values of s also have a large number of applications. We consider several known neighborhoods, generalize them and propose some new ones. The heuristics are evaluated both theoretically and experimentally and dominating algorithms are selected. We also demonstrate that a combination of two neighborhoods may yield a heuristic that is superior to both of its components.
Local Search Heuristics For The Multidimensional Assignment Problem
4,156
We present randomized approximation algorithms for multi-criteria Max-TSP. For Max-STSP with k > 1 objective functions, we obtain an approximation ratio of $1/k - \epsilon$ for arbitrarily small $\epsilon > 0$. For Max-ATSP with k objective functions, we obtain an approximation ratio of $1/(k+1) - \epsilon$.
Approximating Multi-Criteria Max-TSP
4,157
In their seminal work, Alon, Matias, and Szegedy introduced several sketching techniques, including showing that 4-wise independence is sufficient to obtain good approximations of the second frequency moment. In this work, we show that their sketching technique can be extended to product domains $[n]^k$ by using the product of 4-wise independent functions on $[n]$. Our work extends that of Indyk and McGregor, who showed the result for $k = 2$. Their primary motivation was the problem of identifying correlations in data streams. In their model, a stream of pairs $(i,j) \in [n]^2$ arrive, giving a joint distribution $(X,Y)$, and they find approximation algorithms for how close the joint distribution is to the product of the marginal distributions under various metrics, which naturally corresponds to how close $X$ and $Y$ are to being independent. By using our technique, we obtain a new result for the problem of approximating the $\ell_2$ distance between the joint distribution and the product of the marginal distributions for $k$-ary vectors, instead of just pairs, in a single pass. Our analysis gives a randomized algorithm that is a $(1 \pm \epsilon)$ approximation (with probability $1-\delta$) that requires space logarithmic in $n$ and $m$ and proportional to $3^k$.
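A rough Python sketch of this construction (the number of repetitions and the helper names are illustrative): each coordinate of the product domain gets its own 4-wise independent $\pm 1$ hash, built from a random degree-3 polynomial over a prime field, and the basic estimator multiplies the per-coordinate signs:

    import random

    P = (1 << 61) - 1  # a Mersenne prime, assumed larger than the universe

    def fourwise_sign():
        # random degree-3 polynomial over F_P gives a 4-wise independent
        # family; the low bit is mapped to {+1, -1}
        a, b, c, d = (random.randrange(P) for _ in range(4))
        return lambda x: 1 - 2 * (((a*x**3 + b*x**2 + c*x + d) % P) & 1)

    def estimate_F2(stream, k, reps=200):
        ests = []
        for _ in range(reps):
            hs = [fourwise_sign() for _ in range(k)]
            z = 0
            for tup in stream:
                s = 1
                for h, coord in zip(hs, tup):
                    s *= h(coord)       # product of per-coordinate signs
                z += s
            ests.append(z * z)
        return sum(ests) / reps         # averaging tames the 3^k variance factor

    stream = [(1, 2), (1, 2), (3, 4)]   # frequencies 2 and 1, so F2 = 5
    print(estimate_F2(stream, k=2))     # concentrates near 5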
AMS Without 4-Wise Independence on Product Domains
4,158
A Bloom filter is a space-efficient structure for storing static sets, where the space efficiency is gained at the expense of a small probability of false positives. A Bloomier filter generalizes a Bloom filter to compactly store a function with a static support. In this article we give a simple construction of a Bloomier filter. The construction is linear in space and requires constant time to evaluate. The creation of our Bloomier filter takes linear time, which is faster than the existing construction. We show how one can improve the space utilization further at the cost of increasing the time for creating the data structure.
Bloomier Filters: A second look
4,159
We examine several online matching problems, with applications to Internet advertising reservation systems. Consider an edge-weighted bipartite graph G, with partite sets L, R. We develop an 8-competitive algorithm for the following secretary problem: Initially given R, and the size of L, the algorithm receives the vertices of L sequentially, in a random order. When a vertex l \in L is seen, all edges incident to l are revealed, together with their weights. The algorithm must immediately either match l to an available vertex of R, or decide that l will remain unmatched. Dimitrov and Plaxton show a 16-competitive algorithm for the transversal matroid secretary problem, which is the special case with weights on vertices, not edges. (Equivalently, one may assume that for each l \in L, the weights on all edges incident to l are identical.) We use a similar algorithm, but simplify and improve the analysis to obtain a better competitive ratio for the more general problem. Perhaps of more interest is the fact that our analysis is easily extended to obtain competitive algorithms for similar problems, such as to find disjoint sets of edges in hypergraphs where edges arrive online. We also introduce secretary problems with adversarially chosen groups. Finally, we give a 2e-competitive algorithm for the secretary problem on graphic matroids, where, with edges appearing online, the goal is to find a maximum-weight acyclic subgraph of a given graph.
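For readers unfamiliar with the template: the classical single-item secretary rule that these matching algorithms generalize observes roughly an $n/e$ prefix and then takes the first record-beating candidate. A minimal Python sketch (background only, not the 8-competitive algorithm itself):

    import math, random

    def secretary(values):
        n = len(values)
        sample = max(1, int(n / math.e))
        threshold = max(values[:sample])        # observe, never accept
        for v in values[sample:]:
            if v > threshold:
                return v                        # first record-beater
        return values[-1]                       # forced to take the last one

    trials, wins = 10000, 0
    for _ in range(trials):
        vals = random.sample(range(100), 100)   # random arrival order
        wins += secretary(vals) == 99
    print(wins / trials)                        # roughly 1/e ~ 0.37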
Algorithms for Secretary Problems on Graphs and Hypergraphs
4,160
In this paper two scheduling models are addressed. First is the standard model (unicast) where requests (or jobs) are independent. The other is the broadcast model where broadcasting a page can satisfy multiple outstanding requests for that page. We consider online scheduling of requests when they have deadlines. Unlike previous models, which mainly consider the objective of maximizing throughput while respecting deadlines, here we focus on scheduling all the given requests with the goal of minimizing the maximum {\em delay factor}. We prove strong lower bounds on the achievable competitive ratios for delay factor scheduling even with unit-time requests. For the unicast model we give algorithms that are $(1 + \epsilon)$-speed $O({1 \over \epsilon})$-competitive in both the single machine and multiple machine settings. In the broadcast model we give an algorithm for similar-sized pages that is $(2+ \epsilon)$-speed $O({1 \over \epsilon^2})$-competitive. For arbitrary page sizes we give an algorithm that is $(4+\epsilon)$-speed $O({1 \over \epsilon^2})$-competitive.
Online Scheduling to Minimize the Maximum Delay Factor
4,161
Motivated by the Quality-of-Service (QoS) buffer management problem, we consider online scheduling of packets with hard deadlines in a finite capacity queue. At any time, a queue can store at most $b \in \mathbb Z^+$ packets. Packets arrive over time. Each packet is associated with a non-negative value and an integer deadline. In each time step, only one packet is allowed to be sent. Our objective is to maximize the total value gained by the packets sent by their deadlines in an online manner. Due to the Internet traffic's chaotic characteristics, no stochastic assumptions are made on the packet input sequences. This model is called a {\em finite-queue model}. We use competitive analysis to measure an online algorithm's performance versus an unrealizable optimal offline algorithm that constructs the worst possible input based on the knowledge of the online algorithm. For the finite-queue model, we first present a deterministic 3-competitive memoryless online algorithm. Then, we give a randomized ($\phi^2 = ((1 + \sqrt{5}) / 2)^2 \approx 2.618$)-competitive memoryless online algorithm. The algorithmic framework and its theoretical analysis include several interesting features. First, our algorithms use (possibly) modified characteristics of packets; these characteristics may not be the same as those specified in the input sequence. Second, our analysis method is different from the classical potential function approach.
Algorithms for Scheduling Weighted Packets with Deadlines in a Bounded Queue
4,162
The problem of approximate string matching is important in many different areas such as computational biology, text processing and pattern recognition. A great effort has been made to design efficient algorithms addressing several variants of the problem, including comparison of two strings, approximate pattern identification in a string or calculation of the longest common subsequence that two strings share. We designed an output-sensitive algorithm solving the edit distance problem between two strings of lengths n and m respectively in time O((s-|n-m|)min(m,n,s)+m+n) and linear space, where s is the edit distance between the two strings. This worst-case time bound sets the quadratic factor of the algorithm independent of the longest string length and improves existing theoretical bounds for this problem. Our implementation also excels in practice, especially in cases where the two strings compared differ significantly in length. Source code of our algorithm is available at http://www.cs.miami.edu/\~dimitris/edit_distance
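A compact Python sketch of the band-doubling idea underlying output-sensitive edit distance (illustrative only; the paper's algorithm achieves the stated bound with additional machinery): run the dynamic program inside a diagonal band of width t and double t until the answer provably fits inside the band.

    def banded_edit_distance(a, b):
        t = max(1, abs(len(a) - len(b)))
        while True:
            d = _dp_in_band(a, b, t)
            if d is not None and d <= t:   # answer certified inside the band
                return d
            t *= 2

    def _dp_in_band(a, b, t):
        n, m = len(a), len(b)
        INF = float("inf")
        prev = [j if j <= t else INF for j in range(m + 1)]
        for i in range(1, n + 1):
            cur = [INF] * (m + 1)
            if i <= t:
                cur[0] = i
            for j in range(max(1, i - t), min(m, i + t) + 1):
                cur[j] = min(prev[j] + 1, cur[j - 1] + 1,
                             prev[j - 1] + (a[i - 1] != b[j - 1]))
            prev = cur
        return prev[m] if prev[m] != INF else None

    print(banded_edit_distance("kitten", "sitting"))  # 3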
Improved Algorithms for Approximate String Matching (Extended Abstract)
4,163
Finding the largest clique is a notoriously hard problem, even on random graphs. It is known that the clique number of a random graph G(n,1/2) is almost surely either k or k+1, where k = 2log n - 2log(log n) - 1. However, a simple greedy algorithm finds a clique of size only (1+o(1))log n, with high probability, and finding larger cliques -- even of size (1+epsilon)log n -- in randomized polynomial time has been a long-standing open problem. In this paper, we study the following generalization: given a random graph G(n,1/2), find the largest subgraph with edge density at least (1-delta). We show that a simple modification of the greedy algorithm finds a subset of 2log n vertices whose induced subgraph has edge density at least 0.951, with high probability. To complement this, we show that almost surely there is no subset of 2.784log n vertices whose induced subgraph has edge density 0.951 or more.
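A Python sketch of a greedy procedure in this spirit (the paper's exact modification may differ; the seed and graph size below are illustrative): repeatedly add the vertex with the most edges into the current set until 2 log n vertices are chosen, then report the induced edge density.

    import math, random

    def greedy_dense_subset(n, seed=0):
        rng = random.Random(seed)
        adj = [[False] * n for _ in range(n)]
        for i in range(n):
            for j in range(i + 1, n):
                adj[i][j] = adj[j][i] = rng.random() < 0.5   # G(n, 1/2)
        deg_in = [0] * n          # edges from each vertex into the chosen set
        chosen = set()
        for _ in range(int(2 * math.log2(n))):
            v = max((u for u in range(n) if u not in chosen),
                    key=lambda u: deg_in[u])
            chosen.add(v)
            for u in range(n):
                if adj[v][u]:
                    deg_in[u] += 1
        k = len(chosen)
        edges = sum(adj[a][b] for a in chosen for b in chosen if a < b)
        return edges / (k * (k - 1) / 2)

    print(greedy_dense_subset(400))   # typically well above 0.5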
Finding Dense Subgraphs in G(n,1/2)
4,164
We describe Haskell implementations of interesting combinatorial generation algorithms with focus on boolean functions and logic circuit representations. First, a complete exact combinational logic circuit synthesizer is described as a combination of catamorphisms and anamorphisms. Using pairing and unpairing functions on natural number representations of truth tables, we derive an encoding for Binary Decision Diagrams (BDDs) with the unique property that its boolean evaluation faithfully mimics its structural conversion to a natural number through recursive application of a matching pairing function. We then use this result to derive ranking and unranking functions for BDDs and reduced BDDs. Finally, a generalization of the encoding techniques to Multi-Terminal BDDs is provided. The paper is organized as a self-contained literate Haskell program, available at http://logic.csci.unt.edu/tarau/research/2008/fBDD.zip . Keywords: exact combinational logic synthesis, binary decision diagrams, encodings of boolean functions, pairing/unpairing functions, ranking/unranking functions for BDDs and MTBDDs, declarative combinatorics in Haskell
Declarative Combinatorics: Boolean Functions, Circuit Synthesis and BDDs in Haskell
4,165
We investigate effects of ordering in blocked matrix--matrix multiplication. We find that submatrices do not have to be stored contiguously in memory to achieve near optimal performance. Instead it is the choice of execution order of the submatrix multiplications that leads to a speedup of up to four times for small block sizes. This is in contrast to results for single matrix elements showing that contiguous memory allocation quickly becomes irrelevant as the block size increases.
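A minimal Python/NumPy sketch of the object under study (the block size and the reordering below are illustrative choices of ours): the list of submatrix products is materialized explicitly so that its execution order can be permuted without changing the result.

    import numpy as np

    def blocked_matmul(A, B, bs, order=None):
        n = A.shape[0]
        nb = n // bs
        C = np.zeros_like(A)
        tasks = [(i, j, k) for i in range(nb) for j in range(nb) for k in range(nb)]
        if order is not None:
            tasks = order(tasks)        # choose the execution order here
        for i, j, k in tasks:
            I, J, K = i * bs, j * bs, k * bs
            C[I:I+bs, J:J+bs] += A[I:I+bs, K:K+bs] @ B[K:K+bs, J:J+bs]
        return C

    A, B = np.random.rand(128, 128), np.random.rand(128, 128)
    # k-major order reuses the same block of B across consecutive products
    kmajor = lambda ts: sorted(ts, key=lambda t: (t[2], t[0], t[1]))
    assert np.allclose(blocked_matmul(A, B, 32, kmajor), A @ B)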
Cache oblivious storage and access heuristics for blocked matrix-matrix multiplication
4,166
In this paper we consider two problems regarding the scheduling of available personnel in order to perform a given quantity of work, which can be arbitrarily decomposed into a sequence of activities. We are interested in schedules which minimize the overall dissatisfaction, where each employee's dissatisfaction is modeled as a time-dependent linear function. For the two situations considered we provide a detailed mathematical analysis, as well as efficient algorithms for determining optimal schedules.
Minimum Dissatisfaction Personnel Scheduling
4,167
Compressed Counting (CC) was recently proposed for very efficiently computing the (approximate) $\alpha$th frequency moments of data streams, where $0<\alpha \leq 2$. Several estimators were reported including the geometric mean estimator, the harmonic mean estimator, the optimal power estimator, etc. The geometric mean estimator is particularly interesting for theoretical purposes. For example, when $\alpha \to 1$, the complexity of CC (using the geometric mean estimator) is $O(1/\epsilon)$, breaking the well-known large-deviation bound $O(1/\epsilon^2)$. The case $\alpha\approx 1$ has important applications, for example, computing entropy of data streams. For practical purposes, this study proposes the optimal quantile estimator. Compared with previous estimators, this estimator is computationally more efficient and is also more accurate when $\alpha > 1$.
The Optimal Quantile Estimator for Compressed Counting
4,168
Compressed Counting (CC) was recently proposed for approximating the $\alpha$th frequency moments of data streams, for $0<\alpha \leq 2$. Under the relaxed strict-Turnstile model, CC dramatically improves the standard algorithm based on {\em symmetric stable random projections}, especially as $\alpha\to 1$. A direct application of CC is to estimate the entropy, which is an important summary statistic in Web/network measurement and often serves as a crucial "feature" for data mining. The R\'enyi entropy and the Tsallis entropy are functions of the $\alpha$th frequency moments; and both approach the Shannon entropy as $\alpha\to 1$. A recent theoretical work suggested using the $\alpha$th frequency moment to approximate the Shannon entropy with $\alpha=1+\delta$ and very small $|\delta|$ (e.g., $<10^{-4}$). In this study, we experiment using CC to estimate frequency moments, R\'enyi entropy, Tsallis entropy, and Shannon entropy, on real Web crawl data. We demonstrate the variance-bias trade-off in estimating Shannon entropy and provide practical recommendations. In particular, our experiments enable us to draw some important conclusions: (1) As $\alpha\to 1$, CC dramatically improves {\em symmetric stable random projections} in estimating frequency moments, R\'enyi entropy, Tsallis entropy, and Shannon entropy. The improvements appear to approach "infinity." (2) Using {\em symmetric stable random projections} and $\alpha = 1+\delta$ with very small $|\delta|$ does not provide a practical algorithm because the required sample size is enormous.
A Very Efficient Scheme for Estimating Entropy of Data Streams Using Compressed Counting
4,169
Estimating frequency moments of data streams is a very well studied problem and tight bounds are known on the amount of space that is necessary and sufficient when the stream is adversarially ordered. Recently, motivated by various practical considerations and applications in learning and statistics, there has been growing interest in studying streams that are randomly ordered. In this paper we improve the previous lower bounds on the space required to estimate the frequency moments of randomly ordered streams.
Better Bounds for Frequency Moments in Random-Order Streams
4,170
In this paper, we present approximation algorithms for combinatorial optimization problems under probabilistic constraints. Specifically, we focus on stochastic variants of two important combinatorial optimization problems: the k-center problem and the set cover problem, with uncertainty characterized by a probability distribution over the set of points or elements to be covered. We consider these problems under adaptive and non-adaptive settings, and present efficient approximation algorithms for the case when the underlying distribution is a product distribution. In contrast to the expected cost model prevalent in the stochastic optimization literature, our problem definitions support restrictions on the probability distributions of the total costs, via incorporating constraints that bound the probability with which the incurred costs may exceed a given threshold.
Stochastic Combinatorial Optimization under Probabilistic Constraints
4,171
The k-means method is a widely used clustering algorithm. One of its distinguished features is its speed in practice. Its worst-case running-time, however, is exponential, leaving a gap between practical and theoretical performance. Arthur and Vassilvitskii (FOCS 2006) aimed at closing this gap, and they proved a bound of $\mathrm{poly}(n^k, \sigma^{-1})$ on the smoothed running-time of the k-means method, where n is the number of data points and $\sigma$ is the standard deviation of the Gaussian perturbation. This bound, though better than the worst-case bound, is still much larger than the running-time observed in practice. We improve the smoothed analysis of the k-means method by showing two upper bounds on the expected running-time of k-means. First, we prove that the expected running-time is bounded by a polynomial in $n^{\sqrt k}$ and $\sigma^{-1}$. Second, we prove an upper bound of $k^{kd} \cdot \mathrm{poly}(n, \sigma^{-1})$, where d is the dimension of the data space. The polynomial is independent of k and d, and we obtain a polynomial bound for the expected running-time for $k, d \in O(\sqrt{\log n/\log \log n})$. Finally, we show that k-means runs in smoothed polynomial time for one-dimensional instances.
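For reference, a bare-bones NumPy implementation of the k-means method whose iteration count is the quantity bounded above (the initialization rule and test data here are illustrative):

    import numpy as np

    def kmeans(X, k, seed=0):
        rng = np.random.default_rng(seed)
        centers = X[rng.choice(len(X), size=k, replace=False)]
        iterations = 0
        while True:
            iterations += 1
            # assignment step: nearest center for every point
            d = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
            labels = d.argmin(axis=1)
            # update step: each center moves to the mean of its cluster
            new = np.array([X[labels == c].mean(axis=0) if (labels == c).any()
                            else centers[c] for c in range(k)])
            if np.allclose(new, centers):
                return labels, centers, iterations
            centers = new

    X = np.vstack([np.random.randn(50, 2), np.random.randn(50, 2) + 5])
    print(kmeans(X, 2)[2])   # iteration count; typically small in practice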
Improved Smoothed Analysis of the k-Means Method
4,172
Recent work has addressed the algorithmic problem of allocating advertisement space for keywords in sponsored search auctions so as to maximize revenue, most of which assume that pricing is done via a first-price auction. This does not realistically model the Generalized Second Price (GSP) auction used in practice, in which bidders pay the next-highest bid for keywords that they are allocated. Towards the goal of more realistically modeling these auctions, we introduce the Second-Price Ad Auctions problem, in which bidders' payments are determined by the GSP mechanism. We show that the complexity of the Second-Price Ad Auctions problem is quite different than that of the more studied First-Price Ad Auctions problem. First, unlike the first-price variant, for which small constant-factor approximations are known, it is NP-hard to approximate the Second-Price Ad Auctions problem to any non-trivial factor, even when the bids are small compared to the budgets. Second, this discrepancy extends even to the 0-1 special case that we call the Second-Price Matching problem (2PM). Offline 2PM is APX-hard, and for online 2PM there is no deterministic algorithm achieving a non-trivial competitive ratio and no randomized algorithm achieving a competitive ratio better than 2. This contrasts with the results for the analogous special case in the first-price model, the standard bipartite matching problem, which is solvable in polynomial time and which has deterministic and randomized online algorithms achieving better competitive ratios. On the positive side, we provide a 2-approximation for offline 2PM and a 5.083-competitive randomized algorithm for online 2PM. The latter result makes use of a new generalization of a result on the performance of the "Ranking" algorithm for online bipartite matching.
Thinking Twice about Second-Price Ad Auctions
4,173
We present fast algorithms for constructing probabilistic embeddings and approximate distance oracles in sparse graphs. The main ingredient is a fast algorithm for sampling the probabilistic partitions of Calinescu, Karloff, and Rabani in sparse graphs.
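For concreteness, one sample of a Calinescu-Karloff-Rabani partition of a finite metric can be drawn as follows (a dense distance-matrix sketch in Python; the paper's contribution is doing this fast in sparse graphs, which this snippet does not attempt):

    import random

    def ckr_partition(dist, D, rng=random):
        # dist[u][v]: metric distances; D: diameter parameter
        n = len(dist)
        r = D / 4 + rng.random() * (D / 4)    # uniform radius in [D/4, D/2)
        perm = list(range(n))
        rng.shuffle(perm)                     # random priority over centers
        cluster = [None] * n
        for center in perm:                   # each vertex joins the first
            for v in range(n):                # center within distance r
                if cluster[v] is None and dist[center][v] <= r:
                    cluster[v] = center
        return cluster

    pts = list(range(8))                      # points on a line, unit spacing
    dist = [[abs(a - b) for b in pts] for a in pts]
    print(ckr_partition(dist, D=4))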
Fast C-K-R Partitions of Sparse Graphs
4,174
One of the most fundamental problems in large scale network analysis is to determine the importance of a particular node in a network. Betweenness centrality is the most widely used metric to measure the importance of a node in a network. In this paper, we present a randomized parallel algorithm and an algebraic method for computing betweenness centrality of all nodes in a network. We prove that no path-comparison-based algorithm can compute betweenness in less than O(nm) time.
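As a baseline for comparison, the standard sequential algorithm for exact betweenness centrality on unweighted graphs (Brandes' algorithm, O(nm) time, itself path-comparison based) can be sketched in Python as follows:

    from collections import deque

    def betweenness(adj):
        # adj: dict mapping each node to a list of neighbours (undirected)
        bc = {v: 0.0 for v in adj}
        for s in adj:
            sigma = {v: 0 for v in adj}; sigma[s] = 1   # shortest-path counts
            dist = {v: -1 for v in adj}; dist[s] = 0
            preds = {v: [] for v in adj}
            order, q = [], deque([s])
            while q:                                    # BFS from s
                v = q.popleft(); order.append(v)
                for w in adj[v]:
                    if dist[w] < 0:
                        dist[w] = dist[v] + 1; q.append(w)
                    if dist[w] == dist[v] + 1:
                        sigma[w] += sigma[v]; preds[w].append(v)
            delta = {v: 0.0 for v in adj}
            for w in reversed(order):                   # dependency accumulation
                for v in preds[w]:
                    delta[v] += sigma[v] / sigma[w] * (1 + delta[w])
                if w != s:
                    bc[w] += delta[w]
        return bc   # undirected pairs are counted in both directions

    path = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
    print(betweenness(path))   # the middle nodes score highest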
Betweenness Centrality : Algorithms and Lower Bounds
4,175
In this work, we obtain the following new results. 1. Given a sequence $D=((h_1,s_1), (h_2,s_2), \ldots, (h_n,s_n))$ of number pairs, where $s_i>0$ for all $i$, and a number $L_h$, we propose an O(n)-time algorithm for finding an index interval $[i,j]$ that maximizes $\frac{\sum_{k=i}^{j} h_k}{\sum_{k=i}^{j} s_k}$ subject to $\sum_{k=i}^{j} h_k \geq L_h$. 2. Given a sequence $D=((h_1,s_1), (h_2,s_2), \ldots, (h_n,s_n))$ of number pairs, where $s_i=1$ for all $i$, and an integer $L_s$ with $1\leq L_s\leq n$, we propose an $O(n\frac{T(L_s^{1/2})}{L_s^{1/2}})$-time algorithm for finding an index interval $[i,j]$ that maximizes $\frac{\sum_{k=i}^{j} h_k}{\sqrt{\sum_{k=i}^{j} s_k}}$ subject to $\sum_{k=i}^{j} s_k \geq L_s$, where $T(n')$ is the time required to solve the all-pairs shortest paths problem on a graph of $n'$ nodes. By the latest result of Chan \cite{Chan}, $T(n')=O(n'^3 \frac{(\log\log n')^3}{(\log n')^2})$, so our algorithm runs in subquadratic time $O(nL_s\frac{(\log\log L_s)^3}{(\log L_s)^2})$.
Algorithms for Locating Constrained Optimal Intervals
4,176
We study local search algorithms for metric instances of facility location problems: the uncapacitated facility location problem (UFL), as well as uncapacitated versions of the $k$-median, $k$-center and $k$-means problems. All these problems admit natural local search heuristics: for example, in the UFL problem the natural moves are to open a new facility, close an existing facility, and to swap a closed facility for an open one; in $k$-medians, we are allowed only swap moves. The local-search algorithm for $k$-median was analyzed by Arya et al. (SIAM J. Comput. 33(3):544-562, 2004), who used a clever ``coupling'' argument to show that local optima had cost at most constant times the global optimum. They also used this argument to show that the local search algorithm for UFL was a 3-approximation; their techniques have since been applied to other facility location problems. In this paper, we give a proof of the $k$-median result that avoids this coupling argument. These arguments can be used in other settings where the Arya et al. arguments have been used. We also show that for the problem of opening $k$ facilities $F$ to minimize the objective function $\Phi_p(F) = \big(\sum_{j \in V} d(j, F)^p\big)^{1/p}$, the natural swap-based local-search algorithm is a $\Theta(p)$-approximation. This implies constant-factor approximations for $k$-medians (when $p=1$), and $k$-means (when $p = 2$), and an $O(\log n)$-approximation algorithm for the $k$-center problem (which is essentially $p = \log n$).
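The swap-based heuristic analysed above is short enough to state in full; here is a Python sketch on an explicit finite metric (the initialization rule and the tiny example are illustrative, not part of the analysis):

    from itertools import product

    def cost(clients, facilities, d):
        return sum(min(d[j][f] for f in facilities) for j in clients)

    def kmedian_local_search(clients, candidates, k, d):
        current = set(list(candidates)[:k])     # arbitrary initial solution
        improved = True
        while improved:
            improved = False
            for f_out, f_in in product(list(current), candidates):
                if f_in in current:
                    continue
                swapped = (current - {f_out}) | {f_in}
                if cost(clients, swapped, d) < cost(clients, current, d):
                    current, improved = swapped, True
                    break                       # restart from the new solution
        return current

    pts = [0, 1, 2, 10, 11, 12]                 # two clusters on a line
    d = {a: {b: abs(a - b) for b in pts} for a in pts}
    print(kmedian_local_search(pts, pts, k=2, d=d))   # e.g. {1, 11}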
Simpler Analyses of Local Search Algorithms for Facility Location
4,177
We present an algorithm for the Single Source Shortest Paths (SSSP) problem in \emph{$H$-minor free} graphs. For every fixed $H$, if $G$ is a graph with $n$ vertices having integer edge lengths and $s$ is a designated source vertex of $G$, the algorithm runs in $\tilde{O}(n^{\sqrt{11.5}-2} \log L) \le O(n^{1.392} \log L)$ time, where $L$ is the absolute value of the smallest edge length. The algorithm computes shortest paths and the distances from $s$ to all vertices of the graph, or else provides a certificate that $G$ is not $H$-minor free. Our result improves an earlier $O(n^{1.5} \log L)$ time algorithm for this problem, which follows from a general SSSP algorithm of Goldberg.
Single source shortest paths in $H$-minor free graphs
4,178
In this paper we present several algorithmic techniques for inferring the structure of a company when only a limited amount of information is available. We consider problems with two types of inputs: the number of pairs of employees with a given property and restricted information about the hierarchical structure of the company. We provide dynamic programming and greedy algorithms for these problems.
Inferring Company Structure from Limited Available Information
4,179
In this paper we consider several facility location problems with applications to cost and social welfare optimization, when the area map is encoded as a binary (0,1) mxn matrix. We present algorithmic solutions for all the problems. Some cases are too particular to be used in practical situations, but they are at least a starting point for more generic solutions.
Locating Restricted Facilities on Binary Maps
4,180
A string matching -- and more generally, sequence matching -- algorithm is presented that has a linear worst-case computing time bound, a low worst-case bound on the number of comparisons (2n), and sublinear average-case behavior that is better than that of the fastest versions of the Boyer-Moore algorithm. The algorithm retains its efficiency advantages in a wide variety of sequence matching problems of practical interest, including traditional string matching; large-alphabet problems (as in Unicode strings); and small-alphabet, long-pattern problems (as in DNA searches). Since it is expressed as a generic algorithm for searching in sequences over an arbitrary type T, it is well suited for use in generic software libraries such as the C++ Standard Template Library. The algorithm was obtained by adding to the Knuth-Morris-Pratt algorithm one of the pattern-shifting techniques from the Boyer-Moore algorithm, with provision for use of hashing in this technique. In situations in which a hash function or random access to the sequences is not available, the algorithm falls back to an optimized version of the Knuth-Morris-Pratt algorithm.
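For orientation, the Knuth-Morris-Pratt core that the algorithm augments (the added Boyer-Moore-style shift and the hashing fallback are omitted here) looks as follows in Python:

    def kmp_search(pattern, text):
        # failure function: length of the longest proper border of pattern[:i+1]
        fail = [0] * len(pattern)
        k = 0
        for i in range(1, len(pattern)):
            while k and pattern[i] != pattern[k]:
                k = fail[k - 1]
            if pattern[i] == pattern[k]:
                k += 1
            fail[i] = k
        hits, k = [], 0
        for i, c in enumerate(text):
            while k and c != pattern[k]:
                k = fail[k - 1]
            if c == pattern[k]:
                k += 1
            if k == len(pattern):
                hits.append(i - k + 1)          # match ends at position i
                k = fail[k - 1]
        return hits

    print(kmp_search("abab", "abababab"))  # [0, 2, 4]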
A Fast Generic Sequence Matching Algorithm
4,181
In the budgeted learning problem, we are allowed to experiment on a set of alternatives (given a fixed experimentation budget) with the goal of picking a single alternative with the largest possible expected payoff. Approximation algorithms for this problem were developed by Guha and Munagala by rounding a linear program that couples the various alternatives together. In this paper we present an index for this problem, which we call the ratio index, which also guarantees a constant factor approximation. Index-based policies have the advantage that a single number (i.e. the index) can be computed for each alternative irrespective of all other alternatives, and the alternative with the highest index is experimented upon. This is analogous to the famous Gittins index for the discounted multi-armed bandit problem. The ratio index has several interesting structural properties. First, we show that it can be computed in strongly polynomial time. Second, we show that with the appropriate discount factor, the Gittins index and our ratio index are constant factor approximations of each other, and hence the Gittins index also gives a constant factor approximation to the budgeted learning problem. Finally, we show that the ratio index can be used to create an index-based policy that achieves an O(1)-approximation for the finite horizon version of the multi-armed bandit problem. Moreover, the policy does not require any knowledge of the horizon (whereas we compare its performance against an optimal strategy that is aware of the horizon). This yields the following surprising result: there is an index-based policy that achieves an O(1)-approximation for the multi-armed bandit problem, oblivious to the underlying discount factor.
The Ratio Index for Budgeted Learning, with Applications
4,182
We consider the following "multiway cut packing" problem in undirected graphs: we are given a graph G=(V,E) and k commodities, each corresponding to a set of terminals located at different vertices in the graph; our goal is to produce a collection of cuts {E_1,...,E_k} such that E_i is a multiway cut for commodity i and the maximum load on any edge is minimized. The load on an edge is defined to be the number of cuts in the solution crossing the edge. In the capacitated version of the problem the goal is to minimize the maximum relative load on any edge--the ratio of the edge's load to its capacity. Multiway cut packing arises in the context of graph labeling problems where we are given a partial labeling of a set of items and a neighborhood structure over them, and, informally, the goal is to complete the labeling in the most consistent way. This problem was introduced by Rabani, Schulman, and Swamy (SODA'08), who developed an O(log n/log log n) approximation for it in general graphs, as well as an improved O(log^2 k) approximation in trees. Here n is the number of nodes in the graph. We present the first constant factor approximation for this problem in arbitrary undirected graphs. Our approach is based on the observation that every instance of the problem admits a near-optimal laminar solution (that is, one in which no pair of cuts cross each other).
Packing multiway cuts in capacitated graphs
4,183
An L(2,1)-labeling of a graph $G$ is an assignment $f$ from the vertex set $V(G)$ to the set of nonnegative integers such that $|f(x)-f(y)|\ge 2$ if $x$ and $y$ are adjacent and $|f(x)-f(y)|\ge 1$ if $x$ and $y$ are at distance 2, for all $x$ and $y$ in $V(G)$. A $k$-L(2,1)-labeling is an assignment $f:V(G)\to\{0,..., k\}$, and the L(2,1)-labeling problem asks for the minimum $k$, which we denote by $\lambda(G)$, among all possible assignments. It is known that this problem is NP-hard even for graphs of treewidth 2, and trees are among the very few classes for which the problem is polynomially solvable. The running time of the best known algorithm for trees had been $O(\Delta^{4.5} n)$ for more than a decade; however, an $O(n^{1.75})$-time algorithm was proposed recently, substantially improving the previous one, where $\Delta$ is the maximum degree of $T$ and $n=|V(T)|$. In this paper, we finally establish a linear time algorithm for L(2,1)-labeling of trees.
A linear time algorithm for L(2,1)-labeling of trees
4,184
Single node failures represent more than 85% of all node failures in today's large communication networks such as the Internet. Also, these node failures are usually transient. Consequently, having the routing paths globally recomputed does not pay off since the failed nodes recover fairly quickly, and the recomputed routing paths need to be discarded. Instead, we develop algorithms and protocols for dealing with such transient single node failures by suppressing the failure (instead of advertising it across the network), and routing messages to the destination via alternate paths that do not use the failed node. We compare our solution to that of Ref. [11], wherein the authors have presented a "Failure Insensitive Routing" protocol as a proactive recovery scheme for handling transient node failures. We show that our algorithms are faster by an order of magnitude while our paths are equally good. We show via simulation results that our paths are usually within 15% of the optimal for randomly generated graphs with 100-1000 nodes.
Efficient Algorithms and Routing Protocols for Handling Transient Single Node Failures
4,185
The Lovasz Local Lemma [EL75] is a powerful tool to prove the existence of combinatorial objects meeting a prescribed collection of criteria. The technique can directly be applied to the satisfiability problem, yielding that a k-CNF formula in which each clause has common variables with at most 2^(k-2) other clauses is always satisfiable. All hitherto known proofs of the Local Lemma are non-constructive and thus do not provide a recipe as to how a satisfying assignment to such a formula can be efficiently found. In his breakthrough paper [Bec91], Beck demonstrated that if the neighbourhood of each clause is restricted to O(2^(k/48)), a polynomial time algorithm for the search problem exists. Alon simplified and randomized his procedure and improved the bound to O(2^(k/8)) [Alo91]. Srinivasan presented in [Sri08] a variant that achieves a bound of essentially O(2^(k/4)). In [Mos08], we improved this to O(2^(k/2)). In the present paper, we give a randomized algorithm that finds a satisfying assignment to every k-CNF formula in which each clause has a neighbourhood of at most the asymptotic optimum of 2^(k-5)-1 other clauses and that runs in expected time polynomial in the size of the formula, irrespective of k. If k is considered a constant, we can also give a deterministic variant. In contrast to all previous approaches, our analysis no longer invokes the standard non-constructive versions of the Local Lemma and can therefore be considered an alternative, constructive proof of it.
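The resampling procedure at the heart of such constructive arguments is strikingly simple; a Python sketch for k-CNF follows (clauses as tuples of signed literals; the termination guarantee requires a neighbourhood bound of the kind stated above, and the tiny formula is our own example):

    import random

    def violated(clause, assign):
        # a clause is falsified iff every literal in it is false
        return all(assign[abs(l)] != (l > 0) for l in clause)

    def resample_solve(clauses, num_vars, rng=random):
        assign = {v: rng.random() < 0.5 for v in range(1, num_vars + 1)}
        while True:
            bad = [c for c in clauses if violated(c, assign)]
            if not bad:
                return assign
            for l in bad[0]:                 # resample one violated clause
                assign[abs(l)] = rng.random() < 0.5

    # (x1 or x2) and (not x1 or x3) and (not x2 or not x3)
    cnf = [(1, 2), (-1, 3), (-2, -3)]
    print(resample_solve(cnf, 3))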
A constructive proof of the Lovasz Local Lemma
4,186
We study optimization problems that are neither approximable in polynomial time (at least with a constant factor) nor fixed parameter tractable, under widely believed complexity assumptions. Specifically, we focus on Maximum Independent Set, Vertex Coloring, Set Cover, and Bandwidth. In recent years, many researchers have designed exact exponential-time algorithms for these and other hard problems. The goal is to keep the time complexity of order $O(c^n)$, but with the constant $c$ as small as possible. In this work we extend this line of research and investigate whether the constant $c$ can be made even smaller when one allows constant factor approximation. In fact, we describe a kind of approximation scheme -- trade-offs between approximation factor and the time complexity. We study two natural approaches. The first approach consists of designing a backtracking algorithm with a small search tree. We present one result of that kind: a $(4r-1)$-approximation of Bandwidth in time $O^*(2^{n/r})$, for any positive integer $r$. The second approach uses general transformations from exponential-time exact algorithms to approximations that are faster but still exponential-time. For example, we show that for any reduction rate $r$, one can transform any $O^*(c^n)$-time algorithm for Set Cover into a $(1+\ln r)$-approximation algorithm running in time $O^*(c^{n/r})$. We believe that results of that kind extend the applicability of exact algorithms for NP-hard problems.
Exponential-Time Approximation of Hard Problems
4,187
We study the worst-case communication complexity of distributed algorithms computing a path problem based on stationary distributions of random walks in a network $G$ with the caveat that $G$ is also the communication network. The problem is a natural generalization of shortest path lengths to expected path lengths, and represents a model used in many practical applications such as PageRank and EigenTrust as well as other problems involving Markov chains defined by networks. For the problem of computing a single stationary probability, we prove an $\Omega(n^2 \log n)$-bit lower bound; the trivial centralized algorithm costs $O(n^3)$ bits and no known algorithm beats this. We also prove lower bounds for the related problems of approximately computing the stationary probabilities, computing only the ranking of the nodes, and computing the node with maximal rank. As a corollary, we obtain lower bounds for labelling schemes for the hitting time between two nodes.
Lower bounds for distributed Markov chain problems
4,188
We give a simple algorithm for decremental graph connectivity that handles edge deletions in worst-case time $O(k \log n)$ and connectivity queries in $O(\log k)$, where $k$ is the number of edges deleted so far, and uses worst-case space $O(m^2)$. We use this to give an algorithm for $k$-edge witness (``does the removal of a given set of $k$ edges disconnect two vertices $u,v$?'') with worst-case time $O(k^2 \log n)$ and space $O(k^2 n^2)$. For $k = o(\sqrt{n})$ these improve the worst-case $O(\sqrt{n})$ bound for deletion due to Eppstein et al. We also give a decremental connectivity algorithm using $O(n^2 \log n / \log \log n)$ space, whose time complexity depends on the toughness and independence number of the input graph. Finally, we show how to construct a distributed data structure for $k$-vertex witness by giving a labeling scheme. This is the first data structure for $k$-vertex witness that can be efficiently distributed without just giving each vertex a copy of the whole structure. Its complexity depends on being able to construct a linear layout with good properties.
Worst-case time decremental connectivity and k-edge witness
4,189
We consider the problem of partial order production: arrange the elements of an unknown totally ordered set T into a target partially ordered set S, by comparing a minimum number of pairs in T. Special cases include sorting by comparisons, selection, multiple selection, and heap construction. We give an algorithm performing ITLB + o(ITLB) + O(n) comparisons in the worst case. Here, n denotes the size of the ground sets, and ITLB denotes a natural information-theoretic lower bound on the number of comparisons needed to produce the target partial order. Our approach is to replace the target partial order by a weak order (that is, a partial order with a layered structure) extending it, without increasing the information theoretic lower bound too much. We then solve the problem by applying an efficient multiple selection algorithm. The overall complexity of our algorithm is polynomial. This answers a question of Yao (SIAM J. Comput. 18, 1989). We base our analysis on the entropy of the target partial order, a quantity that can be efficiently computed and provides a good estimate of the information-theoretic lower bound.
An Efficient Algorithm for Partial Order Production
4,190
Hash tables are one of the most fundamental data structures in computer science, in both theory and practice. They are especially useful in external memory, where their query performance approaches the ideal cost of just one disk access. Knuth gave an elegant analysis showing that with some simple collision resolution strategies such as linear probing or chaining, the expected average number of disk I/Os of a lookup is merely $1+1/2^{\Omega(b)}$, where each I/O can read a disk block containing $b$ items. Inserting a new item into the hash table also costs $1+1/2^{\Omega(b)}$ I/Os, which is again almost the best one can do if the hash table is entirely stored on disk. However, this assumption is unrealistic since any algorithm operating on an external hash table must have some internal memory (at least $\Omega(1)$ blocks) to work with. The availability of a small internal memory buffer can dramatically reduce the amortized insertion cost to $o(1)$ I/Os for many external memory data structures. In this paper we study the inherent query-insertion tradeoff of external hash tables in the presence of a memory buffer. In particular, we show that for any constant $c>1$, if the query cost is targeted at $1+O(1/b^{c})$ I/Os, then it is not possible to support insertions in less than $1-O(1/b^{\frac{c-1}{4}})$ I/Os amortized, which means that the memory buffer is essentially useless. In contrast, if the query cost is relaxed to $1+O(1/b^{c})$ I/Os for any constant $c<1$, there is a simple dynamic hash table with $o(1)$ insertion cost. These results also answer the open question recently posed by Jensen and Pagh.
Dynamic External Hashing: The Limit of Buffering
4,191
Sorting is a common and ubiquitous activity for computers, so it is not surprising that a plethora of sorting algorithms exist. The accepted performance limit for comparison-based sorting is linearithmic, O(N lg N). This lower bound stems from the fact that such algorithms rely only on the ordering property of the data, using comparisons to arrange the data elements from an initial permutation into a sorted permutation. Linear O(N) sorting algorithms exist, but they exploit a priori knowledge of a specific property of the data and thus gain performance. In contrast, the linearithmic sorting algorithms are general because they use a universal property of data -- comparison -- but pay for that generality with the linearithmic lower bound. The trade-off in sorting algorithms is therefore generality for performance, determined by the property chosen to sort the data elements. In the context of this trade-off, a general-purpose linear sorting algorithm at first seems implausible. That intuition, however, rests on the implicit assumption that ordering is the only universal property of data elements; as will be discussed and examined, it is not. The binar sort is a general-purpose sorting algorithm that uses this other universal property to sort linearly.
Binar Sort: A Linear Generalized Sorting Algorithm
4,192
Randomly organized data is frequently needed to avoid anomalous behavior of other algorithms and computational processes. By analogy, a deck of cards is ordered within the pack, but before a game of poker or solitaire the deck is shuffled to create a random permutation. Shuffling is the process of arranging the data elements of a sequence S into a random permutation: it ensures that the aggregate is randomly arranged while avoiding an ordered or partially ordered permutation. For a sequence S of N data elements there are N! possible permutations, two of which place the elements in sorted order -- one ascending and one descending. Shuffling must avoid inadvertently creating either of these. Shuffling is frequently coupled to another algorithmic function -- pseudo-random number generation -- so the efficiency and quality of the shuffle depend directly upon the random number generation algorithm utilized. A more effective and efficient method of shuffling is to use parameterization to configure the shuffle, and to shuffle into sub-arrays by utilizing the encoding of the data elements. The binar shuffle algorithm uses the encoding of the data elements and parameterization to avoid any direct coupling to a random number generation algorithm, while still remaining a linear O(N) shuffle algorithm.
Binar Shuffle Algorithm: Shuffling Bit by Bit
4,193
We study the classical approximate string matching problem, that is, given strings $P$ and $Q$ and an error threshold $k$, find all ending positions of substrings of $Q$ whose edit distance to $P$ is at most $k$. Let $P$ and $Q$ have lengths $m$ and $n$, respectively. On a standard unit-cost word RAM with word size $w \geq \log n$ we present an algorithm using time $$ O(nk \cdot \min(\frac{\log^2 m}{\log n},\frac{\log^2 m\log w}{w}) + n). $$ When $P$ is short, namely, $m = 2^{o(\sqrt{\log n})}$ or $m = 2^{o(\sqrt{w/\log w})}$, this improves the previously best known time bounds for the problem. The result is achieved using a novel implementation of the Landau-Vishkin algorithm based on tabulation and word-level parallelism.
Faster Approximate String Matching for Short Patterns
4,194
In this paper we study the adaptive prefix coding problem in cases where the size of the input alphabet is large. We present an online prefix coding algorithm that uses $O(\sigma^{1 / \lambda + \epsilon})$ bits of space for any constants $\epsilon>0$, $\lambda>1$, and encodes the string of symbols in $O(\log \log \sigma)$ time per symbol \emph{in the worst case}, where $\sigma$ is the size of the alphabet. The upper bound on the encoding length is $\lambda n H (s) +(\lambda \ln 2 + 2 + \epsilon) n + O (\sigma^{1 / \lambda} \log^2 \sigma)$ bits.
Low-Memory Adaptive Prefix Coding
4,195
A {\em local graph partitioning algorithm} finds a set of vertices with small conductance (i.e. a sparse cut) by adaptively exploring part of a large graph $G$, starting from a specified vertex. For the algorithm to be local, its complexity must be bounded in terms of the size of the set that it outputs, with at most a weak dependence on the number $n$ of vertices in $G$. Previous local partitioning algorithms find sparse cuts using random walks and personalized PageRank. In this paper, we introduce a randomized local partitioning algorithm that finds a sparse cut by simulating the {\em volume-biased evolving set process}, which is a Markov chain on sets of vertices. We prove that for any set of vertices $A$ that has conductance at most $\phi$, for at least half of the starting vertices in $A$ our algorithm will output (with probability at least half) a set of conductance $O(\phi^{1/2} \log^{1/2} n)$. We prove that for a given run of the algorithm, the expected ratio between its computational complexity and the volume of the set that it outputs is $O(\phi^{-1/2} polylog(n))$. In comparison, the best previous local partitioning algorithm, due to Andersen, Chung, and Lang, has the same approximation guarantee, but a larger ratio of $O(\phi^{-1} polylog(n))$ between the complexity and output volume. Using our local partitioning algorithm as a subroutine, we construct a fast algorithm for finding balanced cuts. Given a fixed value of $\phi$, the resulting algorithm has complexity $O((m+n\phi^{-1/2}) polylog(n))$ and returns a cut with conductance $O(\phi^{1/2} \log^{1/2} n)$ and volume at least $v_{\phi}/2$, where $v_{\phi}$ is the largest volume of any set with conductance at most $\phi$.
Finding Sparse Cuts Locally Using Evolving Sets
4,196
Within a mathematically rigorous model, we analyse the curse of dimensionality for deterministic exact similarity search in the context of popular indexing schemes: metric trees. The datasets $X$ are sampled randomly from a domain $\Omega$, equipped with a distance, $\rho$, and an underlying probability distribution, $\mu$. While performing an asymptotic analysis, we send the intrinsic dimension $d$ of $\Omega$ to infinity, and assume that the size of a dataset, $n$, grows superpolynomially yet subexponentially in $d$. Exact similarity search refers to finding the nearest neighbour in the dataset $X$ to a query point $\omega\in\Omega$, where the query points are subject to the same probability distribution $\mu$ as datapoints. Let $\mathscr F$ denote a class of all 1-Lipschitz functions on $\Omega$ that can be used as decision functions in constructing a hierarchical metric tree indexing scheme. Suppose the VC dimension of the class of all sets $\{\omega\colon f(\omega)\geq a\}$, $a\in\mathbb{R}$ is $o(n^{1/4}/\log^2n)$. (In view of a 1995 result of Goldberg and Jerrum, even a stronger complexity assumption $d^{O(1)}$ is reasonable.) We deduce the $\Omega(n^{1/4})$ lower bound on the expected average case performance of hierarchical metric-tree based indexing schemes for exact similarity search in $(\Omega,X)$. In particular, this bound is superpolynomial in $d$.
Lower Bounds on Performance of Metric Tree Indexing Schemes for Exact Similarity Search in High Dimensions
4,197
We consider the problem of tracking heavy hitters and quantiles in the distributed streaming model. The heavy hitters and quantiles are two important statistics for characterizing a data distribution. Let $A$ be a multiset of elements, drawn from the universe $U=\{1,...,u\}$. For a given $0 \le \phi \le 1$, the $\phi$-heavy hitters are those elements of $A$ whose frequency in $A$ is at least $\phi |A|$; the $\phi$-quantile of $A$ is an element $x$ of $U$ such that at most $\phi|A|$ elements of $A$ are smaller than $x$ and at most $(1-\phi)|A|$ elements of $A$ are greater than $x$. Suppose the elements of $A$ are received at $k$ remote {\em sites} over time, and each of the sites has a two-way communication channel to a designated {\em coordinator}, whose goal is to track the set of $\phi$-heavy hitters and the $\phi$-quantile of $A$ approximately at all times with minimum communication. We give tracking algorithms with worst-case communication cost $O(k/\epsilon \cdot \log n)$ for both problems, where $n$ is the total number of items in $A$, and $\epsilon$ is the approximation error. This substantially improves upon the previously known algorithms. We also give matching lower bounds on the communication costs for both problems, showing that our algorithms are optimal. We also consider a more general version of the problem where we simultaneously track the $\phi$-quantiles for all $0 \le \phi \le 1$.
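As background for the single-site building block, the classical Misra-Gries summary finds the candidate $\phi$-heavy hitters of one stream in small space. The Python sketch below is background only, not the distributed tracking algorithm itself, and the test stream is our own:

    def misra_gries(stream, k):
        # keep at most k-1 counters; any item with frequency > n/k survives
        counters = {}
        for x in stream:
            if x in counters:
                counters[x] += 1
            elif len(counters) < k - 1:
                counters[x] = 1
            else:
                for y in list(counters):     # decrement all, evict zeros
                    counters[y] -= 1
                    if counters[y] == 0:
                        del counters[y]
        return counters

    stream = [1] * 50 + [2] * 30 + list(range(3, 23))   # n = 100
    print(misra_gries(stream, k=5))   # 1 and 2 survive as candidates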
Optimal Tracking of Distributed Heavy Hitters and Quantiles
4,198
In several applications such as databases, planning, and sensor networks, parameters such as selectivity, load, or sensed values are known only with some associated uncertainty. The performance of such a system (as captured by some objective function over the parameters) is significantly improved if some of these parameters can be probed or observed. In a resource constrained situation, deciding which parameters to observe in order to optimize system performance itself becomes an interesting and important optimization problem. This general problem is the focus of this paper. One of the most important considerations in this framework is whether adaptivity is required for the observations. Adaptive observations introduce blocking or sequential operations in the system whereas non-adaptive observations can be performed in parallel. One of the important questions in this regard is to characterize the benefit of adaptivity for probes and observations. We present general techniques for designing constant factor approximations to the optimal observation schemes for several widely used scheduling and metric objective functions. We show a unifying technique that relates this optimization problem to the outlier version of the corresponding deterministic optimization. By making this connection, our technique shows constant factor upper bounds for the benefit of adaptivity of the observation schemes. We show that while probing yields significant improvement in the objective function, being adaptive about the probing is not beneficial beyond constant factors.
Adaptive Uncertainty Resolution in Bayesian Combinatorial Optimization Problems
4,199