text | source | __index_level_0__
|---|---|---|
In this paper, we consider the task of answering linear queries under the constraint of differential privacy. This is a general and well-studied class of queries that captures other commonly studied classes, including predicate queries and histogram queries. We show that the accuracy to which a set of linear queries can be answered is closely related to its fat-shattering dimension, a property that characterizes the learnability of real-valued functions in the agnostic-learning setting. | Differential Privacy and the Fat-Shattering Dimension of Linear Queries | 4,400 |
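To ground the setting, here is a minimal sketch of the standard Laplace mechanism for answering a batch of linear queries over a histogram. This is the generic baseline that accuracy bounds of this kind are measured against, not the paper's fat-shattering-based construction; the histogram encoding and the coarse sensitivity bound are our illustrative assumptions.

```python
import numpy as np

def answer_linear_queries(x, Q, epsilon, rng=None):
    """Answer the k linear queries Q @ x with epsilon-differential
    privacy via the Laplace mechanism.

    x : length-n histogram (one count per domain element)
    Q : k-by-n query matrix with entries in [0, 1]

    Changing one individual's record changes x by +/-1 in at most two
    coordinates, so each query moves by at most 2 and the L1
    sensitivity of the whole batch is at most 2k (a coarse bound).
    """
    rng = rng or np.random.default_rng()
    k = Q.shape[0]
    sensitivity = 2.0 * k
    noise = rng.laplace(scale=sensitivity / epsilon, size=k)
    return Q @ x + noise
```

The additive error of this baseline grows linearly with the number of queries; results like the paper's characterize when one can do substantially better.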
We consider the problem of finding a \textit{semi-matching} in bipartite graphs, a problem that has also been extensively studied under various names in the scheduling literature. We give faster algorithms for both the weighted and the unweighted case. For the weighted case, we give an $O(nm\log n)$-time algorithm, where $n$ is the number of vertices and $m$ is the number of edges, by exploiting the geometric structure of the problem. This improves the classical $O(n^3)$ algorithms by Horn [Operations Research 1973] and Bruno, Coffman and Sethi [Communications of the ACM 1974]. For the unweighted case, the bound can be improved even further. We give a simple divide-and-conquer algorithm which runs in $O(\sqrt{n}m\log n)$ time, improving on two previous $O(nm)$-time algorithms by Abraham [MSc thesis, University of Glasgow 2003] and Harvey, Ladner, Lov\'asz and Tamir [WADS 2003 and Journal of Algorithms 2006]. We also extend this algorithm to solve the \textit{Balanced Edge Cover} problem in $O(\sqrt{n}m\log n)$ time, improving on the previous $O(nm)$-time algorithm by Harada, Ono, Sadakane and Yamashita [ISAAC 2008]. | Faster Algorithms for Semi-Matching Problems | 4,401
By creating some new concepts and methods: checking tree, long unit path, direct contradiction unit pair, indirect contradiction unit pair, additional contradiction unit pair, 2-unit layer and 3-unit layer, redundant units, and destroying parallel pairs, we transform solving a 3SAT problem into solving 2SAT problems in polynomial time. Thus we prove that NP=P. | A Polynomial time Algorithm for 3SAT | 4,402
We present a multi-level graph partitioning algorithm based on the extreme idea of contracting only a single edge on each level of the hierarchy. This obviates the need for a matching algorithm and promises very good partitioning quality, since there are very few changes between two levels. Using an efficient data structure and new flexible ways to break off local search improvements early, we obtain an algorithm that scales to large inputs and produces the best known partitioning results for many inputs. For example, in Walshaw's well-known benchmark tables we achieve 155 improvements, dominating the entries for large graphs. | n-Level Graph Partitioning | 4,403
We give efficient algorithms for volume sampling, i.e., for picking $k$-subsets of the rows of any given matrix with probabilities proportional to the squared volumes of the simplices defined by them and the origin (or the squared volumes of the parallelepipeds defined by these subsets of rows). This solves an open problem from the monograph on spectral algorithms by Kannan and Vempala. Our first algorithm for volume sampling $k$-subsets of rows from an $m$-by-$n$ matrix runs in $O(kmn^{\omega} \log n)$ arithmetic operations and a second variant of it for $(1+\epsilon)$-approximate volume sampling runs in $O(mn \log m \cdot k^{2}/\epsilon^{2} + m \log^{\omega} m \cdot k^{2\omega+1}/\epsilon^{2\omega} \cdot \log(k \epsilon^{-1} \log m))$ arithmetic operations, which is almost linear in the size of the input (i.e., the number of entries) for small $k$. Our efficient volume sampling algorithms imply several interesting results for low-rank matrix approximation. | Efficient volume sampling for row/column subset selection | 4,404 |
Given a weighted graph $G$ and an error parameter $\epsilon > 0$, the {\em graph sparsification} problem requires sampling edges in $G$ and giving the sampled edges appropriate weights to obtain a sparse graph $G_{\epsilon}$ (containing $O(n\log n)$ edges in expectation) with the following property: the weight of every cut in $G_{\epsilon}$ is within a factor of $(1\pm \epsilon)$ of the weight of the corresponding cut in $G$. We provide a generic framework that sets out sufficient conditions for any particular sampling scheme to result in good sparsifiers, and obtain a set of results by simple instantiations of this framework. The results we obtain include the following: (1) We improve the time complexity of graph sparsification from $O(m\log^3 n)$ to $O(m + n\log^4 n)$ for graphs with polynomial edge weights. (2) We improve the time complexity of graph sparsification from $O(m\log^3 n)$ to $O(m\log^2 n)$ for graphs with arbitrary edge weights. (3) If the size of the sparsifier is allowed to be $O(n\log^2 n/\epsilon^2)$ instead of $O(n\log n/\epsilon^2)$, we improve the time complexity of sparsification to $O(m)$ for graphs with polynomial edge weights. (4) We show that sampling using standard connectivities results in good sparsifiers, thus resolving an open question of Benczur and Karger. As a corollary, we give a simple proof of (a slightly weaker version of) a result due to Spielman and Srivastava showing that sampling using effective resistances produces good sparsifiers. (5) We give a simple proof showing that sampling using strong connectivities results in good sparsifiers, a result obtained previously using a more involved proof by Benczur and Karger. A key ingredient of our proofs is a generalization of bounds on the number of small cuts in an undirected graph due to Karger; this generalization might be of independent interest. | A General Framework for Graph Sparsification | 4,405
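The reweighting step that all instantiations of such a framework share is easy to state in code. The sketch below is a generic skeleton under the assumption that a sampling probability has already been computed for every edge (e.g. from connectivity estimates); choosing those probabilities so that cut weights concentrate is exactly what the framework's sufficient conditions govern.

```python
import random

def sample_sparsifier(edges, p, rng=random.Random(0)):
    """Generic edge-sampling sparsifier skeleton.

    edges : list of (u, v, w) tuples of a weighted graph
    p     : dict mapping (u, v) -> sampling probability in (0, 1]

    Keeping edge e with probability p[e] and weight w / p[e] preserves
    the weight of every cut in expectation; concentration around that
    expectation is what a good choice of p must guarantee.
    """
    sparse = []
    for (u, v, w) in edges:
        pe = p[(u, v)]
        if rng.random() < pe:
            sparse.append((u, v, w / pe))
    return sparse
```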
Dimension reduction is a key algorithmic tool with many applications including nearest-neighbor search, compressed sensing and linear algebra in the streaming model. In this work we obtain a {\em sparse} version of the fundamental tool in dimension reduction --- the Johnson--Lindenstrauss transform. Using hashing and local densification, we construct a sparse projection matrix with just $\tilde{O}(\frac{1}{\epsilon})$ non-zero entries per column. We also show a matching lower bound on the sparsity for a large class of projection matrices. Our bounds are somewhat surprising, given the known lower bounds of $\Omega(\frac{1}{\epsilon^2})$ both on the number of rows of any projection matrix and on the sparsity of projection matrices generated by natural constructions. Using this, we achieve an $\tilde{O}(\frac{1}{\epsilon})$ update time per non-zero element for a $(1\pm\epsilon)$-approximate projection, thereby substantially outperforming the $\tilde{O}(\frac{1}{\epsilon^2})$ update time required by prior approaches. A variant of our method offers the same guarantees for sparse vectors, yet its $\tilde{O}(d)$ worst case running time matches the best approach of Ailon and Liberty. | A Sparse Johnson--Lindenstrauss Transform | 4,406 |
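For intuition, the following is an illustrative dense-matrix construction of a sparse sign projection with s nonzeros per column; the paper's actual transform picks the nonzero positions by hashing (with local densification) so that the matrix never needs to be materialized, and the parameter names here are ours.

```python
import numpy as np

def sparse_sign_projection(k, d, s, rng=None):
    """Build a k-by-d projection matrix with exactly s nonzeros per
    column, each a uniform random sign scaled by 1/sqrt(s)."""
    rng = rng or np.random.default_rng(0)
    P = np.zeros((k, d))
    for j in range(d):
        rows = rng.choice(k, size=s, replace=False)
        P[rows, j] = rng.choice([-1.0, 1.0], size=s) / np.sqrt(s)
    return P

# The payoff for streaming updates: changing coordinate i of the input
# vector by delta touches only the s nonzeros of column i,
#     sketch += P[:, i] * delta
# so the update cost is O(s) instead of O(k).
```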
We introduce optimal algorithms for the problems of data placement (DP) and page placement (PP) in networks with a constant number of clients, each of which has limited storage availability and issues requests for data objects. The objective for both problems is to efficiently utilize each client's storage (deciding where to place replicas of objects) so that the total incurred access and installation cost over all clients is minimized. In the PP problem, an extra constraint on the maximum number of clients served by a single client must be satisfied. Our algorithms solve both problems optimally when all objects have uniform lengths. When object lengths are non-uniform, we also find the optimal solution, albeit with a small, asymptotically tight violation of each client's storage size by $\epsilon l_{max}$, where $l_{max}$ is the maximum length of the objects and $\epsilon$ is some arbitrarily small positive constant. We make no assumption on the underlying topology of the network (metric, ultrametric etc.), thus obtaining the first non-trivial results for non-metric data placement problems. | Optimal Data Placement on Networks With Constant Number of Clients | 4,407
A graph $G'(V,E')$ is an $\epsilon$-sparsification of $G$ for some $\epsilon>0$ if every (weighted) cut in $G'$ is within $(1\pm \epsilon)$ of the corresponding cut in $G$. A celebrated result of Benczur and Karger shows that for every undirected graph $G$, an $\epsilon$-sparsification with $O(n\log n/\epsilon^2)$ edges can be constructed in $O(m\log^2 n)$ time. Applications to modern massive data sets often constrain algorithms to use computation models that restrict random access to the input. The semi-streaming model, in which the algorithm is constrained to use $\tilde O(n)$ space, has been shown to be a good abstraction for analyzing graph algorithms in applications to large data sets. Recently, a semi-streaming algorithm for graph sparsification was presented by Ahn and Guha; the total running time of their implementation is $\Omega(mn)$, too large for applications where both space and time are important. In this paper, we introduce a new technique for graph sparsification, namely refinement sampling, that gives an $\tilde{O}(m)$ time semi-streaming algorithm for graph sparsification. Specifically, we show that refinement sampling can be used to design a one-pass streaming algorithm for sparsification that takes $O(\log\log n)$ time per edge, uses $O(\log^2 n)$ space per node, and outputs an $\epsilon$-sparsifier with $O(n\log^3 n/\epsilon^2)$ edges. At a slightly increased space and time complexity, we can reduce the sparsifier size to $O(n \log n/\epsilon^2)$ edges, matching the Benczur-Karger result while improving upon the Benczur-Karger runtime for $m=\omega(n\log^3 n)$. Finally, we show that an $\epsilon$-sparsifier with $O(n \log n/\epsilon^2)$ edges can be constructed in two passes over the data and $O(m)$ time whenever $m =\Omega(n^{1+\delta})$ for some constant $\delta>0$. As a by-product of our approach, we also obtain an $O(m\log\log n+n \log n)$ time streaming algorithm to compute a sparse $k$-connectivity certificate of a graph. | Graph Sparsification via Refinement Sampling | 4,408
One of the driving problems in the CSP area is the Dichotomy Conjecture, formulated in 1993 by Feder and Vardi [STOC'93], stating that for any fixed relational structure G the Constraint Satisfaction Problem CSP(G) is either NP-complete or polynomial-time solvable. A large amount of research has gone into checking various specific cases of this conjecture. One such variant, which has attracted a lot of attention in recent years, is the LIST MATRIX PARTITION problem. In 2004 Cameron et al. [SODA'04] classified almost all LIST MATRIX PARTITION variants for matrices of size at most four. The only case that resisted the classification became known as the STUBBORN PROBLEM. In this paper we show a result which enables us to finish the classification, thus solving a problem that resisted attacks for the last six years. Our approach is based on a combinatorial problem known to be at least as hard as the STUBBORN PROBLEM: the 3-COMPATIBLE COLOURING problem. In this problem we are given a complete graph with each edge assigned one of 3 possible colours, and we want to assign one of those 3 colours to each vertex in such a way that no edge has the same colour as both of its endpoints. The tractability of the 3-COMPATIBLE COLOURING problem has been open for several years, and the best algorithm known prior to this paper, due to Feder et al. [SODA'05], is a quasipolynomial algorithm with an $n^{O(\log n / \log\log n)}$ time complexity. In this paper we present a polynomial-time algorithm for the 3-COMPATIBLE COLOURING problem and consequently we prove a dichotomy for the k-COMPATIBLE COLOURING problem. | The stubborn problem is stubborn no more (a polynomial algorithm for
3-compatible colouring and the stubborn list partition problem) | 4,409 |
In this paper we merge recent developments on exact algorithms for finding an ordering of vertices of a given graph that minimizes bandwidth (the BANDWIDTH problem) and for finding an embedding of a given graph into a line that minimizes distortion (the DISTORTION problem). For both problems we develop algorithms that work in $O(9.363^n)$ time and polynomial space. For BANDWIDTH, this improves the $O^*(10^n)$ algorithm by Feige and Kilian from 2000; for DISTORTION, this is the first polynomial-space exact algorithm that works in $O(c^n)$ time that we are aware of. As a byproduct, we enhance the $O(5^{n+o(n)})$-time and $O^*(2^n)$-space algorithm for DISTORTION by Fomin et al. to an algorithm working in $O(4.383^n)$ time and space. | Bandwidth and Distortion Revisited | 4,410
We present a fast multiscale approach for the network minimum logarithmic arrangement problem. This type of arrangement plays an important role in network compression and fast node/link access operations. The algorithm is of linear complexity and exhibits good scalability, which makes it practical and attractive for use on large-scale instances. Its effectiveness is demonstrated on a large set of real-life networks. These networks, with the corresponding best-known minimization results, are suggested as an open benchmark for the research community to evaluate new methods for this problem. | Multiscale approach for the network compression-friendly ordering | 4,411
With the recent surge of social networks like Facebook, new forms of recommendations have become possible -- personalized recommendations of ads, content, and even new social and product connections based on one's social interactions. In this paper, we study whether "social recommendations", or recommendations that utilize a user's social network, can be made without disclosing sensitive links between users. More precisely, we quantify the loss in utility when existing recommendation algorithms are modified to satisfy a strong notion of privacy called differential privacy. We prove lower bounds on the minimum loss in utility for any recommendation algorithm that is differentially private. We also propose two recommendation algorithms that satisfy differential privacy, analyze their performance in comparison to the lower bound, both analytically and experimentally, and show that good private social recommendations are feasible only for a few users in the social network or for a lenient setting of privacy parameters. | On the (Im)possibility of Preserving Utility and Privacy in Personalized
Social Recommendations | 4,412 |
Given a directed acyclic graph with labeled vertices, we consider the problem of finding the most common label sequences ("traces") among all paths in the graph (of some maximum length m). Since the number of paths can be huge, we propose novel algorithms whose time complexity depends only on the size of the graph, and on the relative frequency epsilon of the most frequent traces. In addition, we apply techniques from streaming algorithms to achieve space usage that depends only on epsilon, and not on the number of distinct traces. The abstract problem considered models a variety of tasks concerning finding frequent patterns in event sequences. Our motivation comes from working with a data set of 2 million RFID readings from baggage trolleys at Copenhagen Airport. The question of finding frequent passenger movement patterns is mapped to the above problem. We report on experimental findings for this data set. | On Finding Frequent Patterns in Directed Acyclic Graphs | 4,413 |
We introduce the Deletable Bloom filter (DlBF), a new spin on the popular data structure based on compactly encoding the information of where collisions happen when inserting elements. The DlBF design enables false-negative-free deletions at a fraction of the cost in memory consumption, which turns out to be appealing for certain probabilistic filter applications. | The Deletable Bloom filter: A new member of the Bloom family | 4,414
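The encoding idea is compact enough to sketch directly: divide the bit array into r regions and keep one extra bit per region recording whether any insertion ever collided there. An element may then be deleted by resetting any of its bits that lie in collision-free regions, which cannot create a false negative for other elements. The sizing and hash choice below are illustrative, not the paper's recommended parameters.

```python
class DeletableBloomFilter:
    """Minimal sketch of the DlBF idea."""

    def __init__(self, m=1024, k=4, r=16):
        self.m, self.k, self.r = m, k, r
        self.bits = [False] * m
        self.collision = [False] * r    # one flag per region of m/r bits

    def _positions(self, item):
        return [hash((i, item)) % self.m for i in range(self.k)]

    def _region(self, pos):
        return pos * self.r // self.m

    def insert(self, item):
        for pos in self._positions(item):
            if self.bits[pos]:          # a collision: remember its region
                self.collision[self._region(pos)] = True
            self.bits[pos] = True

    def contains(self, item):
        return all(self.bits[p] for p in self._positions(item))

    def delete(self, item):
        """Reset the item's bits lying in collision-free regions;
        returns False when none do (the element is not deletable)."""
        free = [p for p in self._positions(item)
                if not self.collision[self._region(p)]]
        for p in free:
            self.bits[p] = False
        return bool(free)
```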
We show a deterministic constant-time local algorithm for constructing an approximately maximum flow and minimum fractional cut in multisource-multitarget networks with bounded degrees and bounded edge capacities. Locality means that the decision we make about each edge depends only on its constant-radius neighborhood. We show two applications of the algorithms: one is related to the Aldous-Lyons Conjecture, and the other is about approximating the neighborhood distribution of graphs by bounded-size graphs. The scope of our results can be extended to unimodular random graphs and networks. As a corollary, we generalize the Maximum Flow Minimum Cut Theorem to unimodular random flow networks. | Local algorithms for the maximum flow and minimum cut in bounded-degree
networks | 4,415 |
Given an undirected graph $G$ and an error parameter $\epsilon > 0$, the {\em graph sparsification} problem requires sampling edges in $G$ and giving the sampled edges appropriate weights to obtain a sparse graph $G_{\epsilon}$ with the following property: the weight of every cut in $G_{\epsilon}$ is within a factor of $(1\pm \epsilon)$ of the weight of the corresponding cut in $G$. If $G$ is unweighted, an $O(m\log n)$-time algorithm for constructing $G_{\epsilon}$ with $O(n\log n/\epsilon^2)$ edges in expectation, and an $O(m)$-time algorithm for constructing $G_{\epsilon}$ with $O(n\log^2 n/\epsilon^2)$ edges in expectation have recently been developed (Hariharan-Panigrahi, 2010). In this paper, we improve these results by giving an $O(m)$-time algorithm for constructing $G_{\epsilon}$ with $O(n\log n/\epsilon^2)$ edges in expectation, for unweighted graphs. Our algorithm is optimal in terms of its time complexity; further, no efficient algorithm is known for constructing a sparser $G_{\epsilon}$. Our algorithm is Monte-Carlo, i.e. it produces the correct output with high probability, as are all efficient graph sparsification algorithms. | A Linear-time Algorithm for Sparsification of Unweighted Graphs | 4,416 |
Estimating the first moment of a data stream, defined as $F_1 = \sum_{i \in \{1, 2, \ldots, n\}} \abs{f_i}$, to within a $1 \pm \epsilon$ relative error with high probability is a basic and influential problem in data stream processing. A tight space bound of $O(\epsilon^{-2} \log (mM))$ is known from the work of [Kane-Nelson-Woodruff-SODA10]. However, all known algorithms for this problem require per-update stream processing time of $\Omega(\epsilon^{-2})$, with the only exception being the algorithm of [Ganguly-Cormode-RANDOM07], which requires per-update processing time of $O(\log^2(mM)(\log n))$ albeit with sub-optimal space $O(\epsilon^{-3}\log^2(mM))$. In this paper, we present an algorithm for estimating $F_1$ that achieves near-optimality in both space and update processing time. The space requirement is $O(\epsilon^{-2}(\log n + (\log \epsilon^{-1})\log(mM)))$ and the per-update processing time is $O( (\log n)\log (\epsilon^{-1}))$. | On Estimating the First Frequency Moment of Data Streams | 4,417
The round-trip distance function on a geographic network (such as a road network, flight network, or utility distribution grid) defines the "distance" from a single vertex to a pair of vertices as the minimum-length tour visiting all three vertices and ending at the starting vertex. Given a geographic network and a subset of its vertices called "sites" (for example a road network with a list of grocery stores), a two-site round-trip Voronoi diagram labels each vertex in the network with the pair of sites that minimizes the round-trip distance from that vertex. Alternatively, given a geographic network and two sets of sites of different types (for example grocery stores and coffee shops), a two-color round-trip Voronoi diagram labels each vertex with the pair of sites of different types minimizing the round-trip distance. In this paper, we prove several new properties of two-site and two-color round-trip Voronoi diagrams in a geographic network, including a relationship between the "doubling density" of sites and an upper bound on the number of non-empty Voronoi regions. We show how these properties can be used in new algorithms that are asymptotically more efficient than previously known algorithms when the networks have reasonable distribution properties related to doubling density, and we provide experimental data suggesting that road networks with standard point-of-interest sites have these properties. | Round-Trip Voronoi Diagrams and Doubling Density in Geographic Networks | 4,418
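In an undirected network the round-trip distance from v to a pair {p, q} reduces to d(v,p) + d(p,q) + d(q,v), since both orientations of the tour have the same length. The brute-force diagram construction below (one Dijkstra per site, then a scan over all pairs) is the baseline that the paper's doubling-density-based algorithms improve upon asymptotically; the graph representation is an assumption of this sketch.

```python
import heapq
from itertools import combinations

def dijkstra(adj, src):
    """adj: {u: [(v, w), ...]}, an undirected weighted graph."""
    dist, pq = {src: 0.0}, [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue
        for v, w in adj[u]:
            if d + w < dist.get(v, float("inf")):
                dist[v] = d + w
                heapq.heappush(pq, (d + w, v))
    return dist

def two_site_roundtrip_voronoi(adj, sites):
    """Label each vertex with the pair of sites minimizing the
    round-trip distance d(v,p) + d(p,q) + d(q,v)."""
    inf = float("inf")
    dist = {s: dijkstra(adj, s) for s in sites}
    def roundtrip(v, p, q):
        return dist[p].get(v, inf) + dist[p].get(q, inf) + dist[q].get(v, inf)
    return {v: min(combinations(sites, 2), key=lambda pq: roundtrip(v, *pq))
            for v in adj}
```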
A data stream is viewed as a sequence of $M$ updates of the form $(\text{index},i,v)$ to an $n$-dimensional integer frequency vector $f$, where the update changes $f_i$ to $f_i + v$, and $v$ is an integer assumed to be in $\{-m, ..., m\}$. The $p$th frequency moment $F_p$ is defined as $\sum_{i=1}^n \abs{f_i}^p$. We consider the problem of estimating $F_p$ to within a multiplicative approximation factor of $1\pm \epsilon$, for $p \in [0,2]$. Several estimators have been proposed for this problem, including Indyk's median estimator \cite{indy:focs00}, Li's geometric mean estimator \cite{pinglib:2006}, and an HSS-based estimator \cite{gc:random07}. The first two estimators require space $\tilde{O}(\epsilon^{-2})$, where the $\tilde{O}$ notation hides polylogarithmic factors in $\epsilon^{-1}, m, n$ and $M$. Recently, Kane, Nelson and Woodruff \cite{knw:soda10} presented a space-optimal and novel estimator, called the log-cosine estimator. In this paper, we present an elementary analysis of the log-cosine estimator in a stand-alone setting; the analysis in \cite{knw:soda10} is more complicated. | Estimating small frequency moments of data stream: a characteristic
function approach | 4,419 |
In this paper we present a modification of a technique by Chiba and Nishizeki [Chiba and Nishizeki: Arboricity and Subgraph Listing Algorithms, SIAM J. Comput. 14(1), pp. 210--223 (1985)]. Based on it, we design a data structure suitable for dynamic graph algorithms. We employ the data structure to formulate new algorithms for several problems, including counting subgraphs of four vertices, recognition of diamond-free graphs, cop-win graphs and strongly chordal graphs, among others. We improve the time complexity for graphs with low arboricity or h-index. | Arboricity, h-Index, and Dynamic Algorithms | 4,420 |
The study of {\em balls-into-bins processes} or {\em occupancy problems} has a long history. These processes can be used to translate realistic problems into mathematical ones in a natural way. In general, the goal of a balls-into-bins process is to allocate a set of independent objects (tasks, jobs, balls) to a set of resources (servers, bins, urns) and, thereby, to minimize the maximum load. In this paper, we analyze the maximum load for the {\em chains-into-bins} problem, which is defined as follows. There are $n$ bins, and $m$ objects to be allocated. Each object consists of balls connected into a chain of length $\ell$, so that there are $m \ell$ balls in total. We assume the chains cannot be broken, and that the balls in one chain have to be allocated to $\ell$ consecutive bins. We allow each chain $d$ independent and uniformly random bin choices for its starting position. The chain is allocated using the rule that the maximum load of any bin receiving a ball of that chain is minimized. We show that, for $d \ge 2$ and $m\cdot\ell=O(n)$, the maximum load is $((\ln \ln m)/\ln d) +O(1)$ with probability $1-\tilde O(1/m^{d-1})$. | Chains-into-Bins Processes | 4,421 |
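The allocation rule is simple to simulate, which is also a quick way to observe the (ln ln m)/ln d behaviour empirically. The sketch below assumes the ℓ consecutive bins wrap around cyclically; since every touched bin receives exactly one ball, minimizing the resulting maximum load is the same as choosing the start whose currently fullest touched bin is least loaded.

```python
import random

def chains_into_bins(n, m, ell, d, rng=random.Random(42)):
    """Place m chains of length ell into n bins; each chain draws d
    uniform random starting positions and takes the one minimizing the
    maximum load among the ell consecutive bins it would occupy.
    Returns the final maximum bin load."""
    load = [0] * n
    for _ in range(m):
        starts = [rng.randrange(n) for _ in range(d)]
        best = min(starts,
                   key=lambda s: max(load[(s + i) % n] for i in range(ell)))
        for i in range(ell):
            load[(best + i) % n] += 1
    return max(load)
```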
For many algorithmic problems, traditional algorithms that optimise on the number of instructions executed prove expensive on I/Os. Novel and very different design techniques, when applied to these problems, can produce algorithms that are I/O efficient. This thesis adds to the growing chorus of such results. The computational models we use are the external memory model and the W-Stream model. On the external memory model, we obtain the following results. (1) An I/O efficient algorithm for computing minimum spanning trees of graphs that improves on the performance of the best known algorithm. (2) The first external memory version of soft heap, an approximate meldable priority queue. (3) Hard heap, the first meldable external memory priority queue that matches the amortised I/O performance of the known external memory priority queues, while allowing a meld operation at the same amortised cost. (4) I/O efficient exact, approximate and randomised algorithms for the minimum cut problem, which has not been explored before on the external memory model. (5) Some lower and upper bounds on I/Os for interval graphs. On the W-Stream model, we obtain the following results. (1) Algorithms for various tree problems and list ranking that match the performance of the best known algorithms and are easier to implement. (2) Pass-efficient algorithms for sorting and the maximal independent set problem that improve on the best known algorithms. (3) Pass-efficient algorithms for the graph problems of vertex colouring, approximate single-source shortest paths, maximal matching, and approximate weighted vertex cover. (4) Lower bounds on passes for list ranking and maximal matching. We propose two variants of the W-Stream model, and design algorithms for the maximal independent set, vertex colouring, and planar graph single-source shortest paths problems on those models. | Efficient Algorithms and Data Structures for Massive Data Sets | 4,422
We consider scheduling packets with values in a capacity-bounded buffer in an online setting. In this model, there is a buffer with limited capacity $B$. At any time, the buffer cannot accommodate more than $B$ packets. Packets arrive over time. Each packet is associated with a non-negative value. Packets leave the buffer only because they are either sent or dropped; those packets that have left the buffer are not reconsidered for delivery. In each time step, at most one packet in the buffer can be sent. The order in which the packets are sent should comply with the order of their arrival times. The objective is to maximize the total value of the packets sent in an online manner. In this paper, we study a variant of this FIFO buffering model in which a packet's value is either 1 or $\alpha > 1$. We present a deterministic memoryless 1.304-competitive algorithm. This algorithm has the same competitive ratio as the one presented in (Lotker and Patt-Shamir, PODC 2002 and Computer Networks 2003). However, our algorithm is simpler and does not employ any marking bits. The idea used in our algorithm is novel and different from all previous approaches applied to the general model and its variants: we do not proactively preempt one packet when a new packet arrives; instead, we may preempt more than one 1-value packet when the buffer contains sufficiently many $\alpha$-value packets. | A Better Memoryless Online Algorithm for FIFO Buffering Packets with Two
Values | 4,423 |
We present a near-linear time algorithm that approximates the edit distance between two strings within a polylogarithmic factor; specifically, for strings of length n and every fixed epsilon>0, it can compute a (log n)^O(1/epsilon) approximation in n^(1+epsilon) time. This is an exponential improvement over the previously known factor, 2^(O (sqrt(log n))), with a comparable running time (Ostrovsky and Rabani J.ACM 2007; Andoni and Onak STOC 2009). Previously, no efficient polylogarithmic approximation algorithm was known for any computational task involving edit distance (e.g., nearest neighbor search or sketching). This result arises naturally in the study of a new asymmetric query model. In this model, the input consists of two strings x and y, and an algorithm can access y in an unrestricted manner, while being charged for querying every symbol of x. Indeed, we obtain our main result by designing an algorithm that makes a small number of queries in this model. We then provide a nearly-matching lower bound on the number of queries. Our lower bound is the first to expose hardness of edit distance stemming from the input strings being "repetitive", which means that many of their substrings are approximately identical. Consequently, our lower bound provides the first rigorous separation between edit distance and Ulam distance, which is edit distance on non-repetitive strings, such as permutations. | Polylogarithmic Approximation for Edit Distance and the Asymmetric Query
Complexity | 4,424 |
Motivated by providing quality-of-service differentiated services in the Internet, we consider buffer management algorithms for network switches. We study a multi-buffer model. A network switch consists of multiple size-bounded buffers such that at any time, the number of packets residing in each individual buffer cannot exceed its capacity. Packets arrive at the network switch over time; they have values, deadlines, and designated buffers. In each time step, at most one pending packet is allowed to be sent, and this packet can be from any buffer. The objective is to maximize the total value of the packets sent by their respective deadlines. A 9.82-competitive online algorithm has been provided for this model (Azar and Levy, SWAT 2006), but no offline algorithms have been known yet. In this paper, we study the offline setting of the multi-buffer model. Our contributions include a few optimal offline algorithms for some variants of the model. Each variant has its unique and interesting algorithmic feature. These offline algorithms help us understand the model better when designing online algorithms. | Scheduling Packets with Values and Deadlines in Size-bounded Buffers | 4,425
The rank and select operations over a string of length $n$ from an alphabet of size $\sigma$ have been used widely in the design of succinct data structures. In many applications, the string itself must be maintained dynamically, allowing characters of the string to be inserted and deleted. Under the word RAM model with word size $w=\Omega(\lg n)$, we design a succinct representation of dynamic strings using $nH_0 + o(n)\lg\sigma + O(w)$ bits to support rank, select, insert and delete in $O(\frac{\lg n}{\lg\lg n}(\frac{\lg \sigma}{\lg\lg n}+1))$ time. When the alphabet size is small, i.e. when $\sigma = O(\polylog (n))$, including the case in which the string is a bit vector, these operations are supported in $O(\frac{\lg n}{\lg\lg n})$ time. Our data structures are more efficient than previous results on the same problem, and we have applied them to improve results on the design and construction of space-efficient text indexes. | Succinct Representations of Dynamic Strings | 4,426
The problems of random projections and sparse reconstruction have much in common and have individually received much attention. Surprisingly, until now they progressed in parallel and remained mostly separate. Here, we employ new tools from probability in Banach spaces, that were successfully used in the context of sparse reconstruction, to advance on an open problem in random projection. In particular, we generalize and use an intricate result by Rudelson and Vershynin for sparse reconstruction which uses Dudley's theorem for bounding Gaussian processes. Our main result states that any set of $N = \exp(\tilde{O}(n))$ real vectors in $n$ dimensional space can be linearly mapped to a space of dimension $k=O(\log N\polylog(n))$, while (1) preserving the pairwise distances among the vectors to within any constant distortion and (2) being able to apply the transformation in time $O(n\log n)$ on each vector. This improves on the best known $N = \exp(\tilde{O}(n^{1/2}))$ achieved by Ailon and Liberty and $N = \exp(\tilde{O}(n^{1/3}))$ by Ailon and Chazelle. The dependence on the distortion constant, however, is believed to be suboptimal and is subject to further investigation. For constant distortion, this settles the open question posed by these authors up to a $\polylog(n)$ factor, while considerably simplifying their constructions. | Almost Optimal Unrestricted Fast Johnson-Lindenstrauss Transform | 4,427
The Generalized Traveling Salesman Problem (GTSP) is a well-known combinatorial optimization problem with a host of applications. It is an extension of the Traveling Salesman Problem (TSP) where the set of cities is partitioned into so-called clusters, and the salesman has to visit every cluster exactly once. While the GTSP is a very important combinatorial optimization problem and is well studied in many aspects, the local search algorithms used in the literature are mostly basic adaptations of simple TSP heuristics. Hence, a thorough and deep study of the neighborhoods and local search algorithms specific to the GTSP is required. We formalize the procedure of adapting a TSP neighborhood for the GTSP and classify all other existing and some new GTSP neighborhoods. For every neighborhood, we provide efficient exploration algorithms that are often significantly faster than the ones known from the literature. Finally, we compare different local search implementations empirically. | Efficient Local Search Algorithms for Known and New Neighborhoods for
the Generalized Traveling Salesman Problem | 4,428 |
Given an n x n matrix A, we present a simple, element-wise sparsification algorithm that zeroes out all sufficiently small elements of A and then retains some of the remaining elements with probabilities proportional to the square of their magnitudes. We analyze the approximation accuracy of the proposed algorithm using a recent, elegant non-commutative Bernstein inequality, and compare our bounds with all existing (to the best of our knowledge) element-wise matrix sparsification algorithms. | A Note on Element-wise Matrix Sparsification via a Matrix-valued
Bernstein Inequality | 4,429 |
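The sampling scheme itself fits in a few lines of NumPy. The threshold and the capping of probabilities at 1 below are illustrative choices; the paper's analysis is about how many samples s are needed for the spectral-norm error to be small.

```python
import numpy as np

def sparsify_elementwise(A, s, threshold, rng=None):
    """Zero out entries with |A_ij| < threshold, then keep each
    remaining entry independently with probability p_ij proportional
    to A_ij**2 (capped at 1, expected number of kept entries ~ s),
    rescaling kept entries by 1/p_ij for unbiasedness."""
    rng = rng or np.random.default_rng(0)
    B = np.where(np.abs(A) >= threshold, A, 0.0)
    sq = B ** 2
    p = np.minimum(1.0, s * sq / sq.sum())
    keep = rng.random(A.shape) < p
    out = np.zeros(A.shape, dtype=float)
    out[keep] = B[keep] / p[keep]
    return out
```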
Analysing Web graphs has applications in determining page ranks, fighting Web spam, detecting communities and mirror sites, and more. This study is however hampered by the necessity of storing a major part of huge graphs in external memory, which prevents efficient random access to edge (hyperlink) lists. A number of algorithms involving compression techniques have thus been presented, to represent Web graphs succinctly while also providing random access. Those techniques are usually based on differential encodings of the adjacency lists, finding repeating nodes or node regions in the successive lists, more general grammar-based transformations, or 2-dimensional representations of the binary matrix of the graph. In this paper we present two Web graph compression algorithms. The first can be seen as a careful engineering of the Boldi and Vigna (2004) method. We extend the notion of similarity between link lists and use a more compact encoding of residuals. The algorithm works on blocks of varying size (in the number of input lines) and sacrifices access time for a better compression ratio, achieving a more succinct graph representation than other algorithms reported in the literature. The second algorithm works on blocks of the same size (in the number of input lines), and its key mechanism is merging the block into a single ordered list. This method achieves much more attractive space-time tradeoffs. | Tight and simple Web graph compression | 4,430
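The differential encoding underlying this family of methods exploits the fact that the successors of a page are close to one another in URL order. Here is a minimal sketch of the gap transform alone, without the variable-length bit codes, similarity copying, or blocking that do the heavy lifting in the actual algorithms.

```python
def gap_encode(successors):
    """Turn a strictly increasing adjacency list into small gaps."""
    gaps = [successors[0]]
    for prev, cur in zip(successors, successors[1:]):
        gaps.append(cur - prev - 1)     # consecutive ids differ by >= 1
    return gaps

def gap_decode(gaps):
    out = [gaps[0]]
    for g in gaps[1:]:
        out.append(out[-1] + g + 1)
    return out

assert gap_decode(gap_encode([5, 6, 7, 20, 21])) == [5, 6, 7, 20, 21]
# encoded form: [5, 0, 0, 12, 0], mostly tiny values that are cheap to code
```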
A technique using a systolic array structure is proposed for solving the common approximate substring (CAS) problem. This approach extends the technique introduced in earlier work from the computation of the edit-distance between two strings to the more encompassing CAS problem. A comparison to existing work is given, and the technique presented is validated and analyzed based on simulations. | Systolic Array Technique for Determining Common Approximate Substrings | 4,431 |
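For readers without the hardware background: the computation being mapped onto the systolic array is the textbook edit-distance dynamic program below. Each cell depends only on its left, upper, and upper-left neighbours, so all cells on an anti-diagonal are independent and can be computed in parallel, one diagonal per clock step; that data flow is what a systolic array implements.

```python
def edit_distance(a, b):
    """Classic O(len(a) * len(b)) dynamic program."""
    m, n = len(a), len(b)
    D = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        D[i][0] = i                 # delete all of a[:i]
    for j in range(n + 1):
        D[0][j] = j                 # insert all of b[:j]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            D[i][j] = min(D[i - 1][j] + 1,        # deletion
                          D[i][j - 1] + 1,        # insertion
                          D[i - 1][j - 1] + (a[i - 1] != b[j - 1]))
    return D[m][n]
```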
In this paper we show that constructing a distance oracle for sparse graphs is at least as hard as the set-intersection problem. Given a collection of total size $n$ consisting of $m$ sets drawn from a universe $U$, the set-intersection problem is to build a data structure which can answer whether two sets have a non-empty intersection. A distance oracle is a data structure which can answer distance queries on a given graph. We show that if one can build a distance oracle for a sparse graph $G=(V,E)$ which requires $s(|V|,|E|)$ space and answers a $(2-\epsilon,c)$-approximate distance query in time $t(|V|,|E|)$, where $(2-\epsilon)$ is a multiplicative error and $c$ is a constant additive error, then set-intersection can be solved in $t(m+|U|,n)$ time using $s(m+|U|,n)$ space. | On the hardness of distance oracle for sparse graph | 4,432
Cuckoo hashing is an efficient technique for creating large hash tables with high space utilization and guaranteed constant access times. There, each item can be placed in a location given by any one out of $k$ different hash functions. In this paper we further investigate the random walk heuristic for inserting new items into the hash table in an online fashion. Provided that $k > 2$ and that the number of items in the table is below (but arbitrarily close to) the theoretically achievable load threshold, we show a polylogarithmic bound for the maximum insertion time that holds with high probability. | On the Insertion Time of Cuckoo Hashing | 4,433
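The heuristic under analysis is straightforward to state in code: when every candidate cell of the current item is full, evict the occupant of a uniformly random candidate cell and continue with the evicted item. The table layout and stopping rule below are our assumptions for the sketch; the paper's contribution is the high-probability polylogarithmic bound on the number of steps.

```python
import random

def random_walk_insert(table, hashes, item, max_steps=10_000,
                       rng=random.Random(0)):
    """Random-walk insertion for cuckoo hashing with k = len(hashes)
    hash functions mapping items to cell indices; `table` is a dict
    from cell index to stored item."""
    cur = item
    for _ in range(max_steps):
        cells = [h(cur) for h in hashes]
        for c in cells:
            if c not in table:
                table[c] = cur
                return True
        kick = rng.choice(cells)            # the random-walk step
        table[kick], cur = cur, table[kick]
    return False    # give up; a practical implementation would rehash
```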
We give a complete structural characterisation of the map that the positive branch of a one-way pattern implements. We start with the representation of the positive branch in terms of the phase map decomposition, which is then further analysed to obtain the primary structure of the matrix M representing the phase map decomposition in the computational basis. Using this approach we obtain some preliminary results on the connection between the column structure of a given unitary and the angles of measurements in a pattern that implements it. We believe this work is a step towards a full characterisation of those unitaries with an efficient one-way model implementation. | Algebraic characterisation of one-way patterns | 4,434
We consider the problem of minimizing a function represented as a sum of submodular terms. We assume each term allows an efficient computation of {\em exchange capacities}. This holds, for example, for terms depending on a small number of variables, or for certain cardinality-dependent terms. A naive application of submodular minimization algorithms would not exploit the existence of specialized exchange capacity subroutines for individual terms. To overcome this, we cast the problem as a {\em submodular flow} (SF) problem in an auxiliary graph, and show that applying most existing SF algorithms would rely only on these subroutines. We then explore in more detail Iwata's capacity scaling approach for submodular flows (Math. Programming, 76(2):299--308, 1997). In particular, we show how to improve its complexity in the case when the function contains cardinality-dependent terms. | Minimizing a sum of submodular functions | 4,435 |
A graph is a data structure composed of dots (i.e. vertices) and lines (i.e. edges). The dots and lines of a graph can be organized into intricate arrangements. The ability of a graph to denote objects and their relationships to one another allows for a surprisingly large number of things to be modeled as graphs. From the dependencies that link software packages to the wood beams that provide the framing of a house, most anything has a corresponding graph representation. However, just because it is possible to represent something as a graph does not necessarily mean that its graph representation will be useful. If a modeler can leverage the plethora of tools and algorithms that store and process graphs, then such a mapping is worthwhile. This article explores the world of graphs in computing and exposes situations in which graphical models are beneficial. | Constructions from Dots and Lines | 4,436
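A toy instance of the article's thesis, with hypothetical package names: software dependencies are dots and lines, and once they are a graph, generic graph algorithms answer useful questions about them.

```python
# Hypothetical package-dependency graph as a plain adjacency list.
deps = {
    "app": ["web", "orm"],
    "web": ["http"],
    "orm": ["driver"],
    "http": [],
    "driver": [],
}

def transitive(graph, node, seen=None):
    """Everything reachable from `node`, i.e. its full install set."""
    seen = set() if seen is None else seen
    for d in graph[node]:
        if d not in seen:
            seen.add(d)
            transitive(graph, d, seen)
    return seen

print(sorted(transitive(deps, "app")))   # ['driver', 'http', 'orm', 'web']
```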
Raghavendra (STOC 2008) gave an elegant and surprising result: if Khot's Unique Games Conjecture (STOC 2002) is true, then for every constraint satisfaction problem (CSP), the best approximation ratio is attained by a certain simple semidefinite program and a rounding scheme for it. In this paper, we show that similar results hold for constant-time approximation algorithms in the bounded-degree model. Specifically, we present the following: (i) For every CSP, we construct an oracle that serves access, in constant time, to a nearly optimal solution to a basic LP relaxation of the CSP. (ii) Using the oracle, we give a constant-time rounding scheme that achieves an approximation ratio coincident with the integrality gap of the basic LP. (iii) Finally, we give a generic conversion from integrality gaps of basic LPs to hardness results. All of those results are \textit{unconditional}. Therefore, for every bounded-degree CSP, we give the best possible constant-time approximation algorithm. A CSP instance is called $\epsilon$-far from satisfiability if we must remove at least an $\epsilon$-fraction of constraints to make it satisfiable. A CSP is called testable if there is a constant-time algorithm that distinguishes satisfiable instances from $\epsilon$-far instances with probability at least $2/3$. Using the results above, we also derive, under a technical assumption, an equivalent condition under which a CSP is testable in the bounded-degree model. | Optimal Constant-Time Approximation Algorithms and (Unconditional)
Inapproximability Results for Every Bounded-Degree CSP | 4,437 |
Deciding whether a graph can be embedded in a grid using only unit-length edges is NP-complete, even when restricted to binary trees. However, it is not difficult to devise a number of graph classes for which the problem is polynomial, even trivial. A natural step, outstanding thus far, was to provide a broad classification of graphs that make for polynomial or NP-complete instances. We provide such a classification based on the set of allowed vertex degrees in the input graphs, yielding a full dichotomy on the complexity of the problem. As byproducts, the previous NP-completeness result for binary trees was strengthened to strictly binary trees, and the three-dimensional version of the problem was for the first time proven to be NP-complete. Our results were made possible by introducing the concepts of consistent orientations and robust gadgets, and by showing how the former allows NP-completeness proofs by local replacement even in the absence of the latter. | Complexity dichotomy on partial grid recognition | 4,438 |
In this paper we study the question of whether or not a static search tree should ever be unbalanced. We present several methods to restructure an unbalanced $k$-ary search tree $T$ into a new tree $R$ that preserves many of the properties of $T$ while having a height of $\log_k n + 1$, which is one unit off the optimal height. More specifically, we show that it is possible to ensure that the depth of the elements in $R$ is no more than their depth in $T$ plus at most $\log_k \log_k n + 2$. At the same time, it is possible to guarantee that the average access time $P(R)$ in tree $R$ is no more than the average access time $P(T)$ in tree $T$ plus $O(\log_k P(T))$. This suggests that for most applications, a balanced tree is always a better option than an unbalanced one, since the balanced tree has a similar average access time and a much better worst-case access time. | Should Static Search Trees Ever Be Unbalanced? | 4,439
In this paper we present several new and practical techniques for range aggregation and selection problems in multidimensional data structures and other types of sets of values. We also present some new extensions and applications of fundamental set maintenance problems. | Practical Range Aggregation, Selection and Set Maintenance Techniques | 4,440
We give the first constant-factor approximation algorithm for Sparsest Cut with general demands in bounded treewidth graphs. In contrast to previous algorithms, which rely on the flow-cut gap and/or metric embeddings, our approach exploits the Sherali-Adams hierarchy of linear programming relaxations. | Approximating Sparsest Cut in Graphs of Bounded Treewidth | 4,441 |
An involution on a finite set is a bijection $I$ such that $I(I(e))=e$ for every element $e$ of the set. A fixed-point free involution on a finite set is an involution such that $I(e) \neq e$ for every element $e$ of the set. In this article, fixed-point free involutions are represented as partitions of the set into pairs, and some properties linked to this representation are exhibited. Then an optimal algorithm to list all the fixed-point free involutions is presented. Its soundness relies on the representation of the fixed-point free involutions as partitions. Finally, an implementation of the algorithm is proposed, with an effective data representation. | An Algorithm to List All the Fixed-Point Free Involutions on a Finite
Set | 4,442 |
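Since a fixed-point free involution is exactly a partition of the set into pairs, the enumeration can be expressed as a short recursion: pair the smallest unpaired element with each remaining element in turn. This sketch conveys the partition view used in the article; the article's algorithm additionally commits to an effective data representation that this naive copy-heavy version does not attempt.

```python
def fixed_point_free_involutions(elements):
    """Yield each fixed-point free involution on `elements` (even size)
    exactly once, as a list of pairs."""
    if not elements:
        yield []
        return
    first, rest = elements[0], elements[1:]
    for i, partner in enumerate(rest):
        for pairing in fixed_point_free_involutions(rest[:i] + rest[i + 1:]):
            yield [(first, partner)] + pairing

# There are (2n-1)!! of them; for 4 elements, 3:
for inv in fixed_point_free_involutions([0, 1, 2, 3]):
    print(inv)
# [(0, 1), (2, 3)]   [(0, 2), (1, 3)]   [(0, 3), (1, 2)]
```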
In this paper we describe a dynamic external memory data structure that supports range reporting queries in three dimensions in $O(\log_B^2 N + \frac{k}{B})$ I/O operations, where $k$ is the number of points in the answer and $B$ is the block size. This is the first dynamic data structure that answers three-dimensional range reporting queries in $\log_B^{O(1)} N + O(\frac{k}{B})$ I/Os. | Dynamic Range Reporting in External Memory | 4,443 |
We study the extremal competitive ratio of Boolean function evaluation. We provide the first non-trivial lower and upper bounds for classes of Boolean functions which are not included in the class of monotone Boolean functions. For the particular case of symmetric functions our bounds are matching and we exactly characterize the best possible competitiveness achievable by a deterministic algorithm. Our upper bound is obtained by a simple polynomial time algorithm. | Competitive Boolean Function Evaluation: Beyond Monotonicity, and the
Symmetric Case | 4,444 |
We obtain polynomial-time approximation-preserving reductions (up to a factor of 1 + \epsilon) from the prize-collecting Steiner tree and prize-collecting Steiner forest problems in planar graphs to the corresponding problems in graphs of bounded treewidth. We also give an exact algorithm for the prize-collecting Steiner tree problem that runs in polynomial time for graphs of bounded treewidth. This, combined with our reductions, yields a PTAS for the prize-collecting Steiner tree problem in planar graphs and generalizes the PTAS of Borradaile, Klein and Mathieu for the Steiner tree problem in planar graphs. Our results build upon the ideas of Borradaile, Klein and Mathieu and the work of Bateni, Hajiaghayi and Marx on a PTAS for the Steiner forest problem in planar graphs. Our main technical result is on the properties of primal-dual algorithms for Steiner tree and forest problems in general graphs when they are run with scaled up penalties. | Prize-Collecting Steiner Tree and Forest in Planar Graphs | 4,445 |
The notion of vertex sparsification was introduced in \cite{M}, where it was shown that for any graph $G = (V, E)$ and a subset of $k$ terminals $K \subset V$, there is a polynomial time algorithm to construct a graph $H = (K, E_H)$ on just the terminal set so that simultaneously for all cuts $(A, K-A)$, the value of the minimum cut in $G$ separating $A$ from $K -A$ is approximately the same as the value of the corresponding cut in $H$. We give the first super-constant lower bounds for how well a cut-sparsifier $H$ can simultaneously approximate all minimum cuts in $G$. We prove a lower bound of $\Omega(\log^{1/4} k)$ -- this is polynomially-related to the known upper bound of $O(\log k/\log \log k)$. This is an exponential improvement on the $\Omega(\log \log k)$ bound given in \cite{LM} which in fact was for a stronger vertex sparsification guarantee, and did not apply to cut sparsifiers. Despite this negative result, we show that for many natural problems, we do not need to incur a multiplicative penalty for our reduction. We obtain optimal $O(\log k)$-competitive Steiner oblivious routing schemes, which generalize the results in \cite{R}. We also demonstrate that for a wide range of graph packing problems (which includes maximum concurrent flow, maximum multiflow and multicast routing, among others, as a special case), the integrality gap of the linear program is always at most $O(\log k)$ times the integrality gap restricted to trees. This result helps to explain the ubiquity of the $O(\log k)$ guarantees for such problems. Lastly, we use our ideas to give an efficient construction for vertex-sparsifiers that match the current best existential results -- this was previously open. Our algorithm makes novel use of Earth-mover constraints. | Vertex Sparsifiers and Abstract Rounding Algorithms | 4,446
Given a capacitated graph $G = (V,E)$ and a set of terminals $K \subseteq V$, how should we produce a graph $H$ only on the terminals $K$ so that every (multicommodity) flow between the terminals in $G$ could be supported in $H$ with low congestion, and vice versa? (Such a graph $H$ is called a flow-sparsifier for $G$.) What if we want $H$ to be a "simple" graph? What if we allow $H$ to be a convex combination of simple graphs? Improving on results of Moitra [FOCS 2009] and Leighton and Moitra [STOC 2010], we give efficient algorithms for constructing: (a) a flow-sparsifier $H$ that maintains congestion up to a factor of $O(\log k/\log \log k)$, where $k = |K|$, (b) a convex combination of trees over the terminals $K$ that maintains congestion up to a factor of $O(\log k)$, and (c) for a planar graph $G$, a convex combination of planar graphs that maintains congestion up to a constant factor. This requires us to give a new algorithm for the 0-extension problem, the first one in which the preimages of each terminal are connected in $G$. Moreover, this result extends to minor-closed families of graphs. Our improved bounds immediately imply improved approximation guarantees for several terminal-based cut and ordering problems. | Vertex Sparsifiers: New Results from Old Techniques | 4,447 |
We study vertex cut and flow sparsifiers that were recently introduced by Moitra, and by Leighton and Moitra. We improve and generalize their results. We give a new polynomial-time algorithm for constructing $O(\log k / \log\log k)$ cut and flow sparsifiers, matching the best existential upper bound on the quality of a sparsifier and improving the previous algorithmic upper bound of $O(\log^2 k / \log\log k)$. We show that flow sparsifiers can be obtained from linear operators approximating minimum metric extensions. We introduce the notion of (linear) metric extension operators, prove that they exist, and give an exact polynomial-time algorithm for finding optimal operators. We then establish a direct connection between flow and cut sparsifiers and Lipschitz extendability of maps in Banach spaces, a notion studied in functional analysis since the 1930s. Using this connection, we prove a lower bound of $\Omega(\sqrt{\log k/\log\log k})$ for flow sparsifiers and a lower bound of $\Omega(\sqrt{\log k}/\log\log k)$ for cut sparsifiers. We show that if a certain open question posed by Ball in 1992 has a positive answer, then there exist $\tilde O(\sqrt{\log k})$ cut sparsifiers. On the other hand, any lower bound on cut sparsifiers better than $\tilde \Omega(\sqrt{\log k})$ would imply a negative answer to this question. | Metric Extension Operators, Vertex Sparsifiers and Lipschitz
Extendability | 4,448 |
Sequence assembly from short reads is an important problem in biology. It is known that solving the sequence assembly problem exactly on a bi-directed de Bruijn graph or a string graph is intractable. However, finding a Shortest Double stranded DNA string (SDDNA) containing all the $k$-long words in the reads seems to be a good heuristic to get close to the original genome. This problem is equivalent to finding a cyclic Chinese Postman (CP) walk on the underlying un-weighted bi-directed de Bruijn graph built from the reads. The Chinese Postman walk Problem (CPP) is solved by reducing it to a general bi-directed flow on this graph, which runs in $O(|E|^2 \log^2(|V|))$ time. In this paper we show that the cyclic CPP on bi-directed graphs can be solved without reducing it to bi-directed flow. We present a $\Theta(p(|V| + |E|) \log(|V|) + (d_{max}p)^3)$ time algorithm to solve the cyclic CPP on a weighted bi-directed de Bruijn graph, where $p = \max\{|\{v \mid d_{in}(v) - d_{out}(v) > 0\}|, |\{v \mid d_{in}(v) - d_{out}(v) < 0\}|\}$ and $d_{max} = \max\{|d_{in}(v) - d_{out}(v)|\}$. Our algorithm performs asymptotically better than the bi-directed flow algorithm when the number of imbalanced nodes $p$ is much smaller than the number of nodes in the bi-directed graph. From our experimental results on various datasets, we have noticed that the value of $p/|V|$ lies between 0.08% and 0.13% with 95% probability. | An Efficient Algorithm For Chinese Postman Walk on Bi-directed de Bruijn
Graphs | 4,449 |
We study the use of sampling for efficiently mining the top-K frequent itemsets of cardinality at most w. To this purpose, we define an approximation to the top-K frequent itemsets to be a family of itemsets which includes (resp., excludes) all very frequent (resp., very infrequent) itemsets, together with an estimate of these itemsets' frequencies with a bounded error. Our first result is an upper bound on the sample size which guarantees that the top-K frequent itemsets mined from a random sample of that size approximate the actual top-K frequent itemsets, with probability larger than a specified value. We show that the upper bound is asymptotically tight when w is constant. Our main algorithmic contribution is a progressive sampling approach, combined with suitable stopping conditions, which on appropriate inputs is able to extract approximate top-K frequent itemsets from samples whose sizes are smaller than the general upper bound. In order to test the stopping conditions, this approach maintains the frequency of all itemsets encountered, which is practical only for small w. However, we show how this problem can be mitigated by using a variation of Bloom filters. A number of experiments conducted on both synthetic and real benchmark datasets show that using samples substantially smaller than the original dataset (i.e., of size defined by the upper bound or reached through the progressive sampling approach) makes it possible to approximate the actual top-K frequent itemsets with accuracy much higher than what is analytically proved. | Mining Top-K Frequent Itemsets Through Progressive Sampling | 4,450
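The mine-the-sample step at the core of this approach is easy to sketch; the paper's contributions are the sample-size bound, the progressive schedule and its stopping conditions, none of which appear below. Exhaustively counting all sub-itemsets of each sampled transaction is practical precisely because w is small.

```python
from collections import Counter
from itertools import combinations
import random

def topk_from_sample(transactions, sample_size, K, w,
                     rng=random.Random(0)):
    """Estimate the top-K frequent itemsets of cardinality <= w from a
    uniform random sample of transactions; returns (itemset, estimated
    frequency) pairs."""
    sample = rng.sample(transactions, sample_size)
    counts = Counter()
    for t in sample:
        items = sorted(t)
        for size in range(1, w + 1):
            for itemset in combinations(items, size):
                counts[itemset] += 1
    return [(set(s), c / sample_size) for s, c in counts.most_common(K)]
```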
We consider the problem of choosing Euclidean points to maximize the sum of their weighted pairwise distances, when each point is constrained to a ball centered at the origin. We derive a dual minimization problem and show strong duality holds (i.e., the resulting upper bound is tight) when some locally optimal configuration of points is affinely independent. We sketch a polynomial time algorithm for finding a near-optimal set of points. | A Bound on the Sum of Weighted Pairwise Distances of Points Constrained
to Balls | 4,451 |
Following previous theoretical work by Srinivasan (FOCS 2001) and the first author (STACS 2006) and a first experimental evaluation on random instances (ALENEX 2009), we investigate how the different recently developed approaches to generating randomized roundings satisfying disjoint cardinality constraints behave when used in two classical algorithmic problems, namely low-congestion routing in networks and max-coverage problems in hypergraphs. We generally find that all randomized rounding algorithms work well, much better than what is guaranteed by existing theoretical work. The derandomized versions again produce significantly smaller rounding errors, with running times still negligible compared to the one for solving the corresponding LP. It thus seems worth preferring them over the randomized variants. The data created in these experiments lets us propose and investigate the following new ideas. For the low-congestion routing problems, we suggest solving a second LP, which yields the same congestion but aims at producing a solution that is easier to round. Experiments show that this reduces the rounding errors considerably, both in combination with randomized and derandomized rounding. For the max-coverage instances, we generally observe that the greedy heuristic also performs very well. We develop a strengthened method of derandomized rounding, and a simple greedy/rounding hybrid approach using greedy and LP-based rounding elements, and observe that both improvements again yield better solutions than either earlier approach on its own. For unit disk max-domination, we also develop a PTAS. Contrary to all other algorithms investigated, it performs not much better in experiments than in theory; thus, unless extremely good solutions are to be obtained with huge computational resources, greedy, LP-based rounding or hybrid approaches are preferable. | Randomized Rounding for Routing and Covering Problems: Experiments and
Improvements | 4,452 |
The Traveling Tournament Problem (TTP) is a challenging combinatorial optimization problem that has attracted the interest of researchers around the world. This paper proposes an improved search neighbourhood for the TTP that has been tested in a simulated annealing context. The neighbourhood encompasses both feasible and infeasible schedules, and can be generated efficiently. For the largest TTP challenge problems with up to 40 teams, solutions found using this neighbourhood are the best currently known, and for smaller problems with 10 teams, three solutions found were subsequently proven optimal. | An Improved Neighbourhood for the Traveling Tournament Problem | 4,453 |
We present randomized algorithms for some well-studied, hard combinatorial problems: the k-path problem, the p-packing of q-sets problem, and the q-dimensional p-matching problem. Our algorithms solve these problems with high probability in time exponential only in the parameter (k, p, q) and using polynomial space; the constant bases of the exponentials are significantly smaller than in previous works. For example, for the k-path problem the improvement is from 2 to 1.66. We also show how to detect if a d-regular graph admits an edge coloring with $d$ colors in time within a polynomial factor of O(2^{(d-1)n/2}). Our techniques build upon and generalize some recently published ideas by I. Koutis (ICALP 2009), R. Williams (IPL 2009), and A. Bj\"orklund (STACS 2010, FOCS 2010). | Narrow sieves for parameterized paths and packings | 4,454 |
We show how one can use certain deterministic algorithms for higher-value constraint satisfaction problems (CSPs) to speed up deterministic local search for 3-SAT. This way, we improve the deterministic worst-case running time for 3-SAT to O(1.439^n). | Using CSP To Improve Deterministic 3-SAT | 4,455 |
We study the following vertex-weighted online bipartite matching problem: $G(U, V, E)$ is a bipartite graph. The vertices in $U$ have weights and are known ahead of time, while the vertices in $V$ arrive online in an arbitrary order and have to be matched upon arrival. The goal is to maximize the sum of weights of the matched vertices in $U$. When all the weights are equal, this reduces to the classic \emph{online bipartite matching} problem for which Karp, Vazirani and Vazirani gave an optimal $\left(1-\frac{1}{e}\right)$-competitive algorithm in their seminal work~\cite{KVV90}. Our main result is an optimal $\left(1-\frac{1}{e}\right)$-competitive randomized algorithm for general vertex weights. We use \emph{random perturbations} of weights by appropriately chosen multiplicative factors. Our solution constitutes the first known generalization of the algorithm in~\cite{KVV90} in this model and provides new insights into the role of randomization in online allocation problems. It also effectively solves the problem of \emph{online budgeted allocations} \cite{MSVV05} in the case when an agent makes the same bid for any desired item, even if the bid is comparable to his budget, complementing the results of \cite{MSVV05, BJN07} which apply when the bids are much smaller than the budgets. | Online Vertex-Weighted Bipartite Matching and Single-bid Budgeted Allocations | 4,456
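The random-perturbation idea can be sketched compactly. We assume the multiplicative perturbation factor is psi(x) = 1 - e^(x-1) with x drawn uniformly from [0,1], which is our reading of the construction; the helper below is illustrative, not a verified reference implementation.

```python
# Perturbed greedy for vertex-weighted online bipartite matching (sketch).
import math
import random

def perturbed_greedy(weights, arrivals, seed=0):
    """weights: dict u -> w_u; arrivals: iterable of neighbor sets in U."""
    rng = random.Random(seed)
    perturbed = {u: w * (1 - math.exp(rng.random() - 1))
                 for u, w in weights.items()}
    matched, value = set(), 0.0
    for neighbors in arrivals:
        free = [u for u in neighbors if u not in matched]
        if not free:
            continue
        u = max(free, key=perturbed.get)   # rank by perturbed weight ...
        matched.add(u)
        value += weights[u]                # ... but collect the true weight
    return matched, value

weights = {"u1": 3.0, "u2": 1.0, "u3": 2.0}
print(perturbed_greedy(weights, [{"u1", "u2"}, {"u1", "u3"}, {"u2"}]))
```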
We study the maximum flow problem in directed H-minor-free graphs where H can be drawn in the plane with one crossing. If a structural decomposition of the graph as a clique-sum of planar graphs and graphs of constant complexity is given, we show that a maximum flow can be computed in O(n log n) time. In particular, maximum flows in directed K_{3,3}-minor-free graphs and directed K_5-minor-free graphs can be computed in O(n log n) time without additional assumptions. | Flows in One-Crossing-Minor-Free Graphs | 4,457 |
In the online packet buffering problem (also known as the unweighted FIFO variant of buffer management), we focus on a single network packet switching device with several input ports and one output port. This device forwards unit-size, unit-value packets from input ports to the output port. Buffers attached to input ports may accumulate incoming packets for later transmission; if they cannot accommodate all incoming packets, their excess is lost. A packet buffering algorithm has to choose from which buffers to transmit packets in order to minimize the number of lost packets and thus maximize the throughput. We present a tight lower bound of e/(e-1) ~ 1.582 on the competitive ratio for throughput maximization, which holds even for fractional or randomized algorithms. This improves the previously best known lower bound of 1.4659 and matches the performance of the algorithm Random Schedule. Our result contradicts the claimed performance of the algorithm Random Permutation; we point out a flaw in its original analysis. | An Optimal Lower Bound for Buffer Management in Multi-Queue Switches | 4,458
We consider the problem of maximizing a nonnegative (possibly non-monotone) submodular set function with or without constraints. Feige et al. [FOCS'07] showed a 2/5-approximation for the unconstrained problem and also proved that no approximation better than 1/2 is possible in the value oracle model. Constant-factor approximations were also given for submodular maximization subject to a matroid independence constraint (a factor of 0.309, Vondrak [FOCS'09]) and for submodular maximization subject to a matroid base constraint, provided that the fractional base packing number is at least 2 (a 1/4-approximation, Vondrak [FOCS'09]). In this paper, we propose a new algorithm for submodular maximization which is based on the idea of {\em simulated annealing}. We prove that this algorithm achieves improved approximation for two problems: a 0.41-approximation for unconstrained submodular maximization, and a 0.325-approximation for submodular maximization subject to a matroid independence constraint. On the hardness side, we show that in the value oracle model it is impossible to achieve a 0.478-approximation for submodular maximization subject to a matroid independence constraint, or a 0.394-approximation subject to a matroid base constraint in matroids with two disjoint bases. Even for the special case of a cardinality constraint, we prove it is impossible to achieve a 0.491-approximation. (Previously it was conceivable that a 1/2-approximation exists for these problems.) It is still an open question whether a 1/2-approximation is possible for unconstrained submodular maximization. | Submodular Maximization by Simulated Annealing | 4,459
We consider the online stochastic matching problem proposed by Feldman et al. [FMMM09] as a model of display ad allocation. We are given a bipartite graph; one side of the graph corresponds to a fixed set of bins and the other side represents the set of possible ball types. At each time step, a ball is sampled independently from the given distribution and it needs to be matched upon its arrival to an empty bin. The goal is to maximize the number of allocations. We present an online algorithm for this problem with a competitive ratio of 0.702. Before our result, algorithms with a competitive ratio better than $1-1/e$ were known under the assumption that the expected number of arriving balls of each type is integral. A key idea of the algorithm is to collect statistics about the decisions of the optimum offline solution using Monte Carlo sampling and use those statistics to guide the decisions of the online algorithm. We also show that our algorithm achieves a competitive ratio of 0.705 when the rates are integral. On the hardness side, we prove that no online algorithm can have a competitive ratio better than 0.823 under the known distribution model (and hence also under the permutation model). This improves upon the 5/6 hardness result proved by Goel and Mehta \cite{GM08} for the permutation model. | Online Stochastic Matching: Online Actions Based on Offline Statistics | 4,460
We present an efficient algorithm to find non-empty minimizers of a symmetric submodular function over any family of sets closed under inclusion. This for example includes families defined by a cardinality constraint, a knapsack constraint, a matroid independence constraint, or any combination of such constraints. Our algorithm makes $O(n^3)$ oracle calls to the submodular function, where $n$ is the cardinality of the ground set. In contrast, the problem of minimizing a general submodular function under a cardinality constraint is known to be inapproximable within $o(\sqrt{n/\log n})$ (Svitkina and Fleischer [2008]). The algorithm is similar to an algorithm of Nagamochi and Ibaraki [1998] to find all nontrivial inclusionwise minimal minimizers of a symmetric submodular function over a set of cardinality $n$ using $O(n^3)$ oracle calls. Their procedure in turn is based on Queyranne's algorithm [1998] to minimize a symmetric submodular function. | Symmetric Submodular Function Minimization Under Hereditary Family Constraints | 4,461
In the Matroid Secretary Problem, introduced by Babaioff et al. [SODA 2007], the elements of a given matroid are presented to an online algorithm in random order. When an element is revealed, the algorithm learns its weight and decides whether or not to select it under the restriction that the selected elements form an independent set in the matroid. The objective is to maximize the total weight of the chosen elements. In the most studied version of this problem, the algorithm has no information about the weights beforehand. We refer to this as the zero information model. In this paper we study a different model, also proposed by Babaioff et al., in which the relative order of the weights is random in the matroid. To be precise, in the random assignment model, an adversary selects a collection of weights that are randomly assigned to the elements of the matroid. Later, the elements are revealed to the algorithm in a random order independent of the assignment. Our main result is the first constant competitive algorithm for the matroid secretary problem in the random assignment model. This solves an open question of Babaioff et al. Our algorithm achieves a competitive ratio of $2e^2/(e-1)$. It exploits the notion of principal partition of a matroid, its decomposition into uniformly dense minors, and a $2e$-competitive algorithm for uniformly dense matroids we also develop. As additional results, we present simple constant competitive algorithms in the zero information model for various classes of matroids including cographic, low density and the case when every element is in a small cocircuit. In the same model, we also give a $ke$-competitive algorithm for $k$-column sparse linear matroids, and a new $O(\log r)$-competitive algorithm for general matroids of rank $r$ which only uses the relative order of the weights seen and not their numerical value, as previously needed. | Matroid Secretary Problem in the Random Assignment Model | 4,462 |
The replacement paths problem for directed graphs is to find, for given nodes s and t and every edge e on the shortest path between them, the shortest path between s and t which avoids e. For unweighted directed graphs on n vertices, the best known runtime was \tilde{O}(n^{2.5}), due to Roditty and Zwick. For graphs with integer weights in {-M,...,M}, Weimann and Yuster recently showed that one can use fast matrix multiplication and solve the problem in O(Mn^{2.584}) time, a runtime which would be O(Mn^{2.33}) if the exponent \omega of matrix multiplication is 2. We improve both of these algorithms. Our new algorithm also relies on fast matrix multiplication and runs in O(M n^{\omega} polylog(n)) time if \omega>2 and O(n^{2+\eps}) for any \eps>0 if \omega=2. Our result shows that, at least for small integer weights, the replacement paths problem in directed graphs may be easier than the related all pairs shortest paths problem in directed graphs, as the current best runtime for the latter is \Omega(n^{2.5}) even if \omega=2. | Faster Replacement Paths | 4,463
Let us call a sequence of numbers heapable if they can be sequentially inserted to form a binary tree with the heap property, where each insertion subsequent to the first occurs at a leaf of the tree, i.e. below a previously placed number. In this paper we consider a variety of problems related to heapable sequences and subsequences that do not appear to have been studied previously. Our motivation for introducing these concepts is two-fold. First, such problems correspond to natural extensions of the well-known secretary problem for hiring an organization with a hierarchical structure. Second, from a purely combinatorial perspective, our problems are interesting variations on similar longest increasing subsequence problems, a problem paradigm that has led to many deep mathematical connections. We provide several basic results. We obtain an efficient algorithm for determining the heapability of a sequence, and also prove that the question of whether a sequence can be arranged in a complete binary heap is NP-hard. Regarding subsequences we show that, with high probability, the longest heapable subsequence of a random permutation of n numbers has length (1 - o(1)) n, and a subsequence of length (1 - o(1)) n can in fact be found online with high probability. We similarly show that for a random permutation a subsequence that yields a complete heap of size \alpha n for a constant \alpha can be found with high probability. Our work highlights the interesting structure underlying this class of subsequence problems, and we leave many further interesting variations open for future work. | Heapable Sequences and Subsequences | 4,464 |
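The heapability test admits a natural greedy sketch: every placed number opens two child slots, and a new number x is hung on the largest open slot whose value is at most x. We believe this matches the efficient algorithm referred to above, but the greedy rule should be treated as our assumption.

```python
# Greedy heapability check for min-heaps (children must be >= their parent).
import bisect

def is_heapable(seq):
    if not seq:
        return True
    slots = [seq[0], seq[0]]        # the root opens two child slots
    for x in seq[1:]:
        i = bisect.bisect_right(slots, x)
        if i == 0:                  # no open slot with value <= x
            return False
        slots.pop(i - 1)            # greedily consume the largest eligible slot
        j = bisect.bisect_left(slots, x)
        slots[j:j] = [x, x]         # x itself opens two child slots
    return True

print(is_heapable([1, 2, 3, 4, 5]))  # True: increasing sequences always are
print(is_heapable([2, 1]))           # False: 1 cannot be placed below 2
```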
We study the problem of ranking with submodular valuations. An instance of this problem consists of a ground set $[m]$, and a collection of $n$ monotone submodular set functions $f^1, \ldots, f^n$, where each $f^i: 2^{[m]} \to R_+$. An additional ingredient of the input is a weight vector $w \in R_+^n$. The objective is to find a linear ordering of the ground set elements that minimizes the weighted cover time of the functions. The cover time of a function is the minimal number of elements in the prefix of the linear ordering that form a set whose corresponding function value is greater than a unit threshold value. Our main contribution is an $O(\ln(1 / \epsilon))$-approximation algorithm for the problem, where $\epsilon$ is the smallest non-zero marginal value that any function may gain from some element. Our algorithm orders the elements using an adaptive residual updates scheme, which may be of independent interest. We also prove that the problem is $\Omega(\ln(1 / \epsilon))$-hard to approximate, unless P = NP. This implies that the outcome of our algorithm is optimal up to constant factors. | Ranking with Submodular Valuations | 4,465 |
A natural probabilistic model for motif discovery has been used to experimentally test the quality of motif discovery programs. In this model, there are $k$ background sequences, and each character in a background sequence is a random character from an alphabet $\Sigma$. A motif $G=g_1g_2...g_m$ is a string of $m$ characters. Into each background sequence, a probabilistically generated approximate copy of $G$ is implanted. For a probabilistically generated approximate copy $b_1b_2...b_m$ of $G$, every character $b_i$ is probabilistically generated such that the probability for $b_i\neq g_i$ is at most $\alpha$. We develop three algorithms that under the probabilistic model can find the implanted motif with high probability via a tradeoff between computational time and the probability of mutation. The methods developed in this paper have been used in our software implementation. We observed encouraging results that show improved performance for motif detection compared with other software. | Sublinear Time Motif Discovery from Multiple Sequences | 4,466
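The generative model is straightforward to reproduce; the sketch below (with parameter names of our choosing) plants one noisy copy of the motif per background sequence. Note that mutating a position to a uniformly random letter gives P(b_i != g_i) = alpha(1 - 1/|Sigma|) <= alpha, consistent with the model.

```python
# Generate planted-motif instances: k background sequences, one noisy copy each.
import random

def plant_motif(k, length, motif, alpha, alphabet="ACGT", seed=0):
    rng = random.Random(seed)
    instances = []
    for _ in range(k):
        seq = [rng.choice(alphabet) for _ in range(length)]
        copy = [c if rng.random() > alpha else rng.choice(alphabet)
                for c in motif]
        pos = rng.randrange(length - len(motif) + 1)
        seq[pos:pos + len(motif)] = copy
        instances.append(("".join(seq), pos))
    return instances

for seq, pos in plant_motif(k=3, length=30, motif="ACGTACGT", alpha=0.1):
    print(pos, seq)
```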
Query evaluation in an XML database requires reconstructing XML subtrees rooted at nodes found by an XML query. Since XML subtree reconstruction can be expensive, one approach to improve query response time is to use reconstruction views - materialized XML subtrees of an XML document, whose nodes are frequently accessed by XML queries. For this approach to be efficient, the principal requirement is a framework for view selection. In this work, we are the first to formalize and study the problem of XML reconstruction view selection. The input is a tree $T$, in which every node $i$ has a size $c_i$ and profit $p_i$, and the size limitation $C$. The goal is to find a subset of subtrees rooted at nodes $i_1,\cdots, i_k$ respectively such that $c_{i_1}+\cdots +c_{i_k}\le C$, and $p_{i_1}+\cdots +p_{i_k}$ is maximal. Furthermore, there is no overlap between any two subtrees selected in the solution. We prove that this problem is NP-hard and present a fully polynomial-time approximation scheme (FPTAS) as a solution. | XML Reconstruction View Selection in XML Databases: Complexity Analysis and Approximation Scheme | 4,467
Motivated by applications in online dating and kidney exchange, the stochastic matching problem was introduced by Chen, Immorlica, Karlin, Mahdian and Rudra (2009). They proved that a simple greedy strategy is a 4-approximation, but conjectured that it is in fact a 2-approximation. In this paper we confirm this hypothesis. | Greedy algorithm for stochastic matching is a 2-approximation | 4,468
The random walk with choice is a well known variation to the random walk that first selects a subset of $d$ neighbouring nodes and then decides to move to the node which maximizes the value of a certain metric; this metric captures the number of (past) visits of the walk to the node. In this paper we propose an enhancement to the random walk with choice by considering a new metric that captures not only the actual visits to a given node, but also the intensity of the visits to the neighbourhood of the node. We compare the random walk with choice with its enhanced counterpart. Simulation results show a significant improvement in cover time, maximum node load and load balancing, mainly in random geometric graphs. | Enhanced Random Walk with Choice: An Empirical Study | 4,469
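For concreteness, here is a simulation sketch in which the metric is instantiated as the plain visit count (move to the least-visited of d sampled neighbours); the enhanced, neighbourhood-intensity metric of the paper would replace the scoring rule. Sampling with replacement and the tie-breaking rule are simplifications of ours.

```python
# Simulate a random walk with choice on an adjacency-list graph.
import random
from collections import defaultdict

def walk_with_choice(adj, start, steps, d=2, seed=0):
    rng = random.Random(seed)
    visits = defaultdict(int)
    v = start
    visits[v] += 1
    for _ in range(steps):
        # Sample d neighbours (with replacement, for simplicity) and move to
        # the one the metric prefers; here: the least-visited candidate.
        candidates = [rng.choice(adj[v]) for _ in range(d)]
        v = min(candidates, key=lambda u: visits[u])
        visits[v] += 1
    return visits

cycle = {i: [(i - 1) % 8, (i + 1) % 8] for i in range(8)}
print(dict(walk_with_choice(cycle, 0, steps=100)))
```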
In this paper, we consider lower bounds on the query complexity for testing CSPs in the bounded-degree model. First, for any ``symmetric'' predicate $P:{0,1}^{k} \to {0,1}$ except EQU, where $k\geq 3$, we show that every (randomized) algorithm that distinguishes satisfiable instances of CSP(P) from instances $(|P^{-1}(0)|/2^k-\epsilon)$-far from satisfiability requires $\Omega(n^{1/2+\delta})$ queries, where $n$ is the number of variables and $\delta>0$ is a constant that depends on $P$ and $\epsilon$. This breaks a natural lower bound $\Omega(n^{1/2})$, which is obtained by the birthday paradox. We also show that every one-sided error tester requires $\Omega(n)$ queries for such $P$. These results are hereditary in the sense that the same results hold for any predicate $Q$ such that $P^{-1}(1) \subseteq Q^{-1}(1)$. For EQU, we give a one-sided error tester whose query complexity is $\tilde{O}(n^{1/2})$. Also, for 2-XOR (or, equivalently E2LIN2), we show an $\Omega(n^{1/2+\delta})$ lower bound for distinguishing instances that are $\epsilon$-close to satisfiability from those that are $(1/2-\epsilon)$-far from it. Next, for the general k-CSP over the binary domain, we show that every algorithm that distinguishes satisfiable instances from instances $(1-2k/2^k-\epsilon)$-far from satisfiability requires $\Omega(n)$ queries. The matching NP-hardness is not known, even assuming the Unique Games Conjecture or the $d$-to-$1$ Conjecture. As a corollary, for Maximum Independent Set on graphs with $n$ vertices and a degree bound $d$, we show that every approximation algorithm within a factor $d/\mathrm{poly}\log d$ and an additive error of $\epsilon n$ requires $\Omega(n)$ queries. Previously, only super-constant lower bounds were known. | Lower Bounds on Query Complexity for Testing Bounded-Degree CSPs | 4,470
In this paper we consider the following modification of the iterative search problem. We are given a tree $T$, so that a dynamic catalog $C(v)$ is associated with every tree node $v$. For any $x$ and for any node-to-root path $\pi$ in $T$, we must find the predecessor of $x$ in $\cup_{v\in \pi} C(v)$. We present a linear space dynamic data structure that supports such queries in $O(t(n)+|\pi|)$ time, where $t(n)$ is the time needed to search in one catalog and $|\pi|$ denotes the number of nodes on path $\pi$. We also consider the reporting variant of this problem, in which for any $x_1$, $x_2$ and for any path $\pi'$ all elements of $\cup_{v\in \pi'} (C(v)\cap [x_1,x_2])$ must be reported; here $\pi'$ denotes a path between an arbitrary node $v_0$ and its ancestor $v_1$. We show that such queries can be answered in $O(t(n)+|\pi'|+ k)$ time, where $k$ is the number of elements in the answer. To illustrate applications of our technique, we describe the first dynamic data structures for the stabbing-max problem, the horizontal point location problem, and the orthogonal line-segment intersection problem with optimal $O(\log n/\log \log n)$ query time and poly-logarithmic update time. | Searching in Dynamic Catalogs on a Tree | 4,471 |
We study LP-rounding approximation algorithms for metric uncapacitated facility-location problems. We first give a new analysis for the algorithm of Chudak and Shmoys, which differs from the analysis of Byrka and Aardal in that now we do not need any bound based on the solution to the dual LP program. Besides obtaining the optimal bifactor approximation as do Byrka and Aardal, we can now also show that the algorithm with scaling parameter equaling 1.58 is, in fact, a 1.58-approximation algorithm. More importantly, we suggest an approach based on additional randomization and analyses such as ours, which could achieve or approach the conjectured optimal 1.46...-approximation for this basic problem. Next, using essentially the same techniques, we obtain improved approximation algorithms in the 2-stage stochastic variant of the problem, where we must open a subset of facilities having only stochastic information about the future demand from the clients. For this problem we obtain a 2.2975-approximation algorithm in the standard setting, and a 2.4957-approximation in the more restricted, per-scenario setting. We then study robust fault-tolerant facility location, introduced by Chechik and Peleg: solutions here are designed to provide low connection cost in case of failure of up to $k$ facilities. Chechik and Peleg gave a 6.5-approximation algorithm for $k=1$ and a ($7.5k + 1.5$)-approximation algorithm for general $k$. We improve this to an LP-rounding $(k+5+4/k)$-approximation algorithm. We also observe that in case of oblivious failures the expected approximation ratio can be reduced to $k + 1.5$, and that the integrality gap of the natural LP-relaxation of the problem is at least $k + 1$. | LP-rounding algorithms for facility-location problems | 4,472
In this paper we initiate the study of minimizing power consumption in the broadcast scheduling model. In this setting there is a wireless transmitter. Over time, requests arrive at the transmitter for pages of information. Multiple requests may be for the same page. When a page is transmitted, all requests for that page receive the transmission simultaneously. The speed at which the transmitter sends data can be dynamically scaled to conserve energy. We consider the problem of minimizing flow time plus energy, the most popular scheduling metric considered in the standard scheduling model when the scheduler is energy aware. We will assume that the power consumed is modeled by an arbitrary convex function. For this problem there is an $\Omega(n)$ lower bound. Due to the lower bound, we consider the resource augmentation model of Gupta et al. \cite{GuptaKP10}. Using resource augmentation, we give a scalable algorithm. Our result also gives a scalable non-clairvoyant algorithm for minimizing weighted flow time plus energy in the standard scheduling model. | Scheduling to Minimize Energy and Flow Time in Broadcast Scheduling | 4,473
Given a graph G = (V,E) and an integer k, an edge modification problem for a graph property P consists in deciding whether there exists a set of edges F of size at most k such that the graph H = (V,E \vartriangle F) satisfies the property P. In the P edge-completion problem, the set F of edges is constrained to be disjoint from E; in the P edge-deletion problem, F is a subset of E; no constraint is imposed on F in the P edge-edition problem. A number of optimization problems can be expressed in terms of graph modification problems, which have been extensively studied in the context of parameterized complexity. When parameterized by the size k of the edge set F, it has been proved that if P is a hereditary property characterized by a finite set of forbidden induced subgraphs, then the three P edge-modification problems are FPT. It was then natural to ask whether these problems also admit a polynomial size kernel. Using recent lower bound techniques, Kratsch and Wahlstrom answered this question negatively. However, the problem remains open on many natural graph classes characterized by forbidden induced subgraphs. Kratsch and Wahlstrom asked whether the result holds when the forbidden subgraphs are paths or cycles and pointed out that the problem is already open in the case of P4-free graphs (i.e. cographs). This paper provides positive and negative results in that line of research. We prove that parameterized cograph edge modification problems have cubic vertex kernels whereas polynomial kernels are unlikely to exist for the Pl-free and Cl-free edge-deletion problems for large enough l. | On the (non-)existence of polynomial kernels for Pl-free edge modification problems | 4,474
We give a space-optimal algorithm with update time O(log^2(1/eps)loglog(1/eps)) for (1+eps)-approximating the pth frequency moment, 0 < p < 2, of a length-n vector updated in a data stream. This provides a nearly exponential improvement in the update time complexity over the previous space-optimal algorithm of [Kane-Nelson-Woodruff, SODA 2010], which had update time Omega(1/eps^2). | Fast Moment Estimation in Data Streams in Optimal Space | 4,475 |
In this work we introduce a new linear time compression algorithm, called "Re-pair for Trees", which compresses ranked ordered trees using linear straight-line context-free tree grammars. Such grammars generalize straight-line context-free string grammars and allow basic tree operations, like traversal along edges, to be executed without prior decompression. Our algorithm can be considered as a generalization of the "Re-pair" algorithm developed by N. Jesper Larsson and Alistair Moffat in 2000. The latter algorithm is a dictionary-based compression algorithm for strings. We also introduce a succinct coding which is specialized in further compressing the grammars generated by our algorithm. This is accomplished without losing the ability to directly execute queries on this compressed representation of the input tree. Finally, we compare the grammars and output files generated by a prototype of the Re-pair for Trees algorithm with those of similar compression algorithms. The obtained results show that our algorithm outperforms its competitors in terms of compression ratio, runtime and memory usage. | Tree structure compression with RePair | 4,476
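To make the dictionary-based idea concrete, here is a naive quadratic sketch of string Re-pair, the algorithm that Re-pair for Trees generalizes: repeatedly replace the most frequent adjacent pair by a fresh nonterminal. The linear-time bookkeeping of Larsson and Moffat (and everything tree-specific) is omitted.

```python
# Naive string Re-pair: build a straight-line grammar for the input string.
from collections import Counter

def repair(s):
    seq = list(s)
    rules, next_id = {}, 0
    while True:
        pairs = Counter(zip(seq, seq[1:]))
        if not pairs:
            break
        pair, freq = pairs.most_common(1)[0]
        if freq < 2:                      # no pair repeats: grammar is final
            break
        nt = f"R{next_id}"
        next_id += 1
        rules[nt] = pair
        out, i = [], 0
        while i < len(seq):               # left-to-right replacement pass
            if i + 1 < len(seq) and (seq[i], seq[i + 1]) == pair:
                out.append(nt)
                i += 2
            else:
                out.append(seq[i])
                i += 1
        seq = out
    return seq, rules

print(repair("abcabcabc"))  # e.g. ['R2', 'R1'] with R0=(a,b), R1=(R0,c), R2=(R1,R1)
```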
For every list of integers x_1, ..., x_m there is some j such that x_1 + ... + x_j - x_{j+1} - ... - x_m \approx 0. So the list can be nearly balanced and for this we only need one alternation between addition and subtraction. But what if the x_i are k-dimensional integer vectors? Using results from topological degree theory we show that balancing is still possible, now with k alternations. This result is useful in multi-objective optimization, as it allows one to compute, in polynomial time, a balance of two alternatives with conflicting costs. The application to two multi-objective optimization problems yields the following results: - A randomized 1/2-approximation for multi-objective maximum asymmetric traveling salesman, which improves and simplifies the best known approximation for this problem. - A deterministic 1/2-approximation for multi-objective maximum weighted satisfiability. | Balanced Combinations of Solutions in Multi-Objective Optimization | 4,477
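The one-dimensional statement has a two-line verification: scan all split points j and keep the one whose signed balanced sum is closest to zero. Since consecutive sums differ by 2x_{j+1} and the endpoints are -S and +S (S being the total), some split lands within max_i |x_i| of zero; the k-dimensional analogue with k alternations is the paper's contribution.

```python
# Find the split point j minimizing |x_1+...+x_j - x_{j+1}-...-x_m|.
def best_split(xs):
    total = sum(xs)
    best_j, best_val, prefix = 0, -total, 0      # j = 0: everything negated
    for j, x in enumerate(xs, start=1):
        prefix += x
        val = prefix - (total - prefix)          # signed balanced sum
        if abs(val) < abs(best_val):
            best_j, best_val = j, val
    return best_j, best_val

print(best_split([3, 1, 4, 1, 5]))  # j = 3: 3+1+4 - (1+5) = 2
```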
We give a time-randomness tradeoff for the quasi-random rumor spreading protocol proposed by Doerr, Friedrich and Sauerwald [SODA 2008] on complete graphs. In this protocol, the goal is to spread a piece of information originating from one vertex throughout the network. Each vertex is assumed to have a (cyclic) list of its neighbors. Once a vertex is informed by one of its neighbors, it chooses a position in its list uniformly at random and then informs its neighbors starting from that position and proceeding in order of the list. Angelopoulos, Doerr, Huber and Panagiotou [Electron.~J.~Combin.~2009] showed that after $(1+o(1))(\log_2 n + \ln n)$ rounds, the rumor will have been broadcasted to all nodes with probability $1 - o(1)$. We study the broadcast time when the amount of randomness available at each node is reduced in a natural way. In particular, we prove that if each node can only make its initial random selection from every $\ell$-th node on its list, then there exist lists such that $(1-\varepsilon) (\log_2 n + \ln n - \log_2 \ell - \ln \ell)+\ell-1$ steps are needed to inform every vertex with probability at least $1-O\bigl(\exp\bigl(-\frac{n^\varepsilon}{2\ln n}\bigr)\bigr)$. This shows that a further reduction of the amount of randomness used in a simple quasi-random protocol comes at a loss of efficiency. | Quasi-Random Rumor Spreading: Reducing Randomness Can Be Costly | 4,478
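The protocol is simple to simulate on the complete graph; the sketch below restricts each node's random starting position to every ell-th list entry, matching the reduced-randomness regime studied here. The round accounting (newly informed nodes start spreading in the next round) is one common convention and is our assumption.

```python
# Simulate quasi-random rumor spreading on the complete graph K_n.
import random

def broadcast_time(n, ell=1, seed=0):
    rng = random.Random(seed)
    lists = {v: [u for u in range(n) if u != v] for v in range(n)}
    pos = {0: rng.randrange(0, n - 1, ell)}  # start among every ell-th entry
    informed = {0}
    rounds = 0
    while len(informed) < n:
        rounds += 1
        newly = []
        for v in list(informed):             # each informed node calls one neighbor
            u = lists[v][pos[v] % (n - 1)]
            pos[v] += 1
            if u not in informed:
                newly.append(u)
        for u in newly:
            informed.add(u)
            pos[u] = rng.randrange(0, n - 1, ell)
    return rounds

print(broadcast_time(256))   # roughly log2(n) + ln(n) rounds in expectation
```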
We present a Monte Carlo algorithm for Hamiltonicity detection in an $n$-vertex undirected graph running in $O^*(1.657^{n})$ time. To the best of our knowledge, this is the first superpolynomial improvement on the worst case runtime for the problem since the $O^*(2^n)$ bound established for TSP almost fifty years ago (Bellman 1962, Held and Karp 1962). It answers in part the first open problem in Woeginger's 2003 survey on exact algorithms for NP-hard problems. For bipartite graphs, we improve the bound to $O^*(1.414^{n})$ time. Both the bipartite and the general algorithm can be implemented to use space polynomial in $n$. We combine several recently resurrected ideas to get the results. Our main technical contribution is a new reduction inspired by the algebraic sieving method for $k$-Path (Koutis ICALP 2008, Williams IPL 2009). We introduce the Labeled Cycle Cover Sum, in which the task is to count weighted arc-labeled cycle covers over a finite field of characteristic two. We reduce Hamiltonicity to Labeled Cycle Cover Sum and apply the determinant summation technique for Exact Set Covers (Bj\"orklund STACS 2010) to evaluate it. | Determinant Sums for Undirected Hamiltonicity | 4,479
We focus on the use of \emph{row sampling} for approximating matrix algorithms. We give applications to matrix multiplication; sparse matrix reconstruction; and $\ell_2$ regression. For a matrix $A\in\mathbb{R}^{m\times d}$ which represents $m$ points in $d\ll m$ dimensions, all of these tasks can be achieved in $O(md^2)$ time via the singular value decomposition (SVD). For appropriate row-sampling probabilities (which typically depend on the norms of the rows of the $m\times d$ left singular matrix of $A$, the \emph{leverage scores}), we give row-sampling algorithms with linear (up to polylog factors) dependence on the stable rank of $A$. This result is achieved through the application of non-commutative Bernstein bounds. We then give, to our knowledge, the first algorithms for computing approximations to the appropriate row-sampling probabilities without going through the SVD of $A$. Thus, these are the first $o(md^2)$ algorithms for row-sampling based approximations to the matrix algorithms which use leverage scores as the sampling probabilities. The techniques we use to approximate sampling according to the leverage scores use some powerful recent results in the theory of random projections for embedding, and may be of some independent interest. We confess that one may perform all these matrix tasks more efficiently using these same random projection methods; however, the resulting algorithms are in terms of a small number of linear combinations of all the rows. In many applications, the actual rows of $A$ have some physical meaning and so methods based on a small number of the actual rows are of interest. | Row Sampling for Matrix Algorithms via a Non-Commutative Bernstein Bound | 4,480
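As a baseline, leverage scores and the induced row sampling are easy to compute via the (expensive) SVD route; the paper's point is precisely how to avoid this SVD, so the sketch below is the O(md^2) baseline, not the paper's algorithm.

```python
# Leverage-score row sampling: E[B^T B] = A^T A under this rescaling.
import numpy as np

def leverage_scores(A):
    U, _, _ = np.linalg.svd(A, full_matrices=False)  # A is m x d, m >> d
    return np.sum(U**2, axis=1)                      # squared row norms of U

def sample_rows(A, r, seed=0):
    rng = np.random.default_rng(seed)
    p = leverage_scores(A)
    p = p / p.sum()
    idx = rng.choice(A.shape[0], size=r, replace=True, p=p)
    scale = 1.0 / np.sqrt(r * p[idx])                # unbiased rescaling
    return A[idx] * scale[:, None]

A = np.random.default_rng(1).normal(size=(1000, 5))
B = sample_rows(A, 200)
print(np.linalg.norm(A.T @ A - B.T @ B) / np.linalg.norm(A.T @ A))
```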
We present an approximate distance oracle for a point set S with n points and doubling dimension $\lambda$. For every $\epsilon>0$, the oracle supports $(1+\epsilon)$-approximate distance queries in (universal) constant time, occupies space $[\epsilon^{-O(\lambda)} + 2^{O(\lambda \log \lambda)}]n$, and can be constructed in $[2^{O(\lambda)} \log^3 n + \epsilon^{-O(\lambda)} + 2^{O(\lambda \log \lambda)}]n$ expected time. This improves upon the best previously known constructions, presented by Har-Peled and Mendel. Furthermore, the oracle can be made fully dynamic with expected $O(1)$ query time and only $2^{O(\lambda)} \log n + \epsilon^{-O(\lambda)} + 2^{O(\lambda \log \lambda)}$ update time. This is the first fully dynamic $(1+\epsilon)$-distance oracle. | Fast, precise and dynamic distance queries | 4,481
Given n elements with nonnegative integer weights w1,..., wn and an integer capacity C, we consider the counting version of the classic knapsack problem: find the number of distinct subsets whose weights add up to at most the given capacity. We give a deterministic algorithm that estimates the number of solutions to within relative error 1 ± eps in time polynomial in n and 1/eps (a fully polynomial approximation scheme). More precisely, our algorithm takes time O(n^3 (1/eps) log (n/eps)). Our algorithm is based on dynamic programming. Previously, randomized polynomial-time approximation schemes were given first by Morris and Sinclair via Markov chain Monte Carlo techniques, and subsequently by Dyer via dynamic programming and rejection sampling. | A Deterministic Polynomial-time Approximation Scheme for Counting Knapsack Solutions | 4,482
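For reference, the exact pseudo-polynomial counting DP that the approximation scheme emulates is a one-liner per item: dp[c] counts subsets of total weight exactly c, and iterating capacities downwards makes each item count at most once.

```python
# Exact counting of knapsack solutions in O(nC) time and O(C) space.
def count_knapsack(weights, capacity):
    dp = [0] * (capacity + 1)
    dp[0] = 1                       # the empty subset
    for w in weights:
        for c in range(capacity, w - 1, -1):
            dp[c] += dp[c - w]      # either skip or take the current item
    return sum(dp)                  # subsets of weight at most `capacity`

print(count_knapsack([1, 2, 3], 3))  # 5: {}, {1}, {2}, {3}, {1,2}
```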
We present a general method of designing fast approximation algorithms for cut-based minimization problems in undirected graphs. In particular, we develop a technique that, given any such problem that can be approximated quickly on trees, allows approximating it almost as quickly on general graphs while only losing a poly-logarithmic factor in the approximation guarantee. To illustrate the applicability of our paradigm, we focus our attention on the undirected sparsest cut problem with general demands and the balanced separator problem. By a simple use of our framework, we obtain poly-logarithmic approximation algorithms for these problems that run in time close to linear. The main tool behind our result is an efficient procedure that decomposes general graphs into simpler ones while approximately preserving the cut-flow structure. This decomposition is inspired by the cut-based graph decomposition of R\"acke that was developed in the context of oblivious routing schemes, as well as by the construction of the ultrasparsifiers due to Spielman and Teng that was employed for preconditioning symmetric diagonally dominant matrices. | Fast Approximation Algorithms for Cut-based Problems in Undirected Graphs | 4,483
We study a discrete diffusion process introduced in some combinatorial games called FLOODIT and MADVIRUS that can be played online and whose computational complexity has been recently studied by Arthur et al. (FUN 2010). The flooding dynamics used in those games can be defined for any colored graph. It has been shown in a first report (in French, hal-00509488 on the HAL archive) that studying this dynamics directly on general graphs is a valuable approach to understand its specificities and extract uncluttered key patterns or algorithms that can be applied with success to particular cases like the square grid of FLOODIT or the hexagonal grid of MADVIRUS, and many other classes of graphs. This report is the English translation of the section of the French report showing that the variant of the problem called 2-FREE-FLOOD-IT can be solved by a polynomial algorithm, answering a question raised in the previous study of FLOODIT by Arthur et al. | 2-FREE-FLOOD-IT is polynomial | 4,484
The Maximum Betweenness Centrality problem (MBC) can be defined as follows. Given a graph, find a $k$-element node set $C$ that maximizes the probability of detecting communication between a pair of nodes $s$ and $t$ chosen uniformly at random. It is assumed that the communication between $s$ and $t$ is realized along a shortest $s$--$t$ path which is, again, selected uniformly at random. The communication is detected if the communication path contains a node of $C$. Recently, Dolev et al. (2009) showed that MBC is NP-hard and gave a $(1-1/e)$-approximation using a greedy approach. We provide a reduction of MBC to Maximum Coverage that simplifies the analysis of the algorithm of Dolev et al. considerably. Our reduction allows us to obtain a new algorithm with the same approximation ratio for a (generalized) budgeted version of MBC. We provide tight examples showing that the analyses of both algorithms are best possible. Moreover, we prove that MBC is APX-complete and provide an exact polynomial-time algorithm for MBC on tree graphs. | Maximum Betweenness Centrality: Approximability and Tractable Cases | 4,485
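The reduction makes the algorithmic core the standard greedy for Maximum Coverage, which achieves the (1 - 1/e) guarantee; a minimal sketch (input format of our choosing) follows.

```python
# Greedy Maximum Coverage: pick k sets covering the most new elements.
def greedy_max_coverage(sets, k):
    covered, chosen = set(), []
    for _ in range(k):
        best = max(sets, key=lambda s: len(sets[s] - covered))
        if not sets[best] - covered:
            break                    # nothing new can be covered
        chosen.append(best)
        covered |= sets[best]
    return chosen, covered

sets = {"a": {1, 2, 3}, "b": {3, 4}, "c": {4, 5, 6}}
print(greedy_max_coverage(sets, 2))  # picks "a" then "c", covering 6 elements
```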
Consider a sequence of bits where we are trying to predict the next bit from the previous bits. Assume we are allowed to say 'predict 0' or 'predict 1', and our payoff is +1 if the prediction is correct and -1 otherwise. We will say that at each point in time the loss of an algorithm is the number of wrong predictions minus the number of right predictions so far. In this paper we are interested in algorithms that have essentially zero (expected) loss over any string at any point in time and yet have small regret with respect to always predicting 0 or always predicting 1. For a sequence of length $T$ our algorithm has regret $14\epsilon T $ and loss $2\sqrt{T}e^{-\epsilon^2 T} $ in expectation for all strings. We show that the tradeoff between loss and regret is optimal up to constant factors. Our techniques extend to the general setting of $N$ experts, where the related problem of trading off regret to the best expert for regret to the `special' expert has been studied by Even-Dar et al. (COLT'07). We obtain essentially zero loss with respect to the special expert and optimal loss/regret tradeoff, improving upon the results of Even-Dar et al and settling the main question left open in their paper. The strong loss bounds of the algorithm have some surprising consequences. A simple iterative application of our algorithm gives essentially optimal regret bounds at multiple time scales, bounds with respect to $k$-shifting optima as well as regret bounds with respect to higher norms of the input sequence. | Prediction strategies without loss | 4,486 |
Kernelization algorithms for the {\sc cluster editing} problem have been a popular topic in recent research on parameterized computation. Thus far most kernelization algorithms for this problem are based on the concept of {\it critical cliques}. In this paper, we present new observations and new techniques for the study of kernelization algorithms for the {\sc cluster editing} problem. Our techniques are based on the study of the relationship between {\sc cluster editing} and graph edge-cuts. As an application, we present an ${\cal O}(n^2)$-time algorithm that constructs a $2k$ kernel for the {\it weighted} version of the {\sc cluster editing} problem. Our result matches the best kernel size for the unweighted version of the {\sc cluster editing} problem, and significantly improves the previous best kernel of quadratic size for the weighted version of the problem. | Cluster Editing: Kernelization based on Edge Cuts | 4,487
We consider the following general scheduling problem: The input consists of n jobs, each with an arbitrary release time, size, and a monotone function specifying the cost incurred when the job is completed at a particular time. The objective is to find a preemptive schedule of minimum aggregate cost. This problem formulation is general enough to include many natural scheduling objectives, such as weighted flow, weighted tardiness, and sum of flow squared. Our main result is a randomized polynomial-time algorithm with an approximation ratio O(log log nP), where P is the maximum job size. We also give an O(1) approximation in the special case when all jobs have identical release times. The main idea is to reduce this scheduling problem to a particular geometric set-cover problem which is then solved using the local ratio technique and Varadarajan's quasi-uniform sampling technique. This general algorithmic approach improves the best known approximation ratios by at least an exponential factor (and much more in some cases) for essentially all of the nontrivial common special cases of this problem. Our geometric interpretation of scheduling may be of independent interest. | The Geometry of Scheduling | 4,488 |
Consider a random graph model where each possible edge $e$ is present independently with some probability $p_e$. Given these probabilities, we want to build a large/heavy matching in the randomly generated graph. However, the only way we can find out whether an edge is present or not is to query it, and if the edge is indeed present in the graph, we are forced to add it to our matching. Further, each vertex $i$ is allowed to be queried at most $t_i$ times. How should we adaptively query the edges to maximize the expected weight of the matching? We consider several matching problems in this general framework (some of which arise in kidney exchanges and online dating, and others arise in modeling online advertisements); we give LP-rounding based constant-factor approximation algorithms for these problems. Our main results are the following: We give a 4-approximation for weighted stochastic matching on general graphs, and a 3-approximation on bipartite graphs. This answers an open question from [Chen et al., ICALP 09]. Combining our LP-rounding algorithm with the natural greedy algorithm, we give an improved 3.46-approximation for unweighted stochastic matching on general graphs. We introduce a generalization of the stochastic online matching problem [Feldman et al., FOCS 09] that also models preference-uncertainty and timeouts of buyers, and give a constant factor approximation algorithm. | When LP is the Cure for Your Matching Woes: Improved Bounds for Stochastic Matchings | 4,489
A geodesic is the shortest path between two vertices in a connected network. The geodesic is the kernel of various network metrics including radius, diameter, eccentricity, closeness, and betweenness. These metrics are the foundation of much network research and thus, have been studied extensively in the domain of single-relational networks (both in their directed and undirected forms). However, geodesics for single-relational networks do not translate directly to multi-relational, or semantic networks, where vertices are connected to one another by any number of edge labels. Here, a more sophisticated method for calculating a geodesic is necessary. This article presents a technique for calculating geodesics in semantic networks with a focus on semantic networks represented according to the Resource Description Framework (RDF). In this framework, a discrete "walker" utilizes an abstract path description called a grammar to determine which paths to include in its geodesic calculation. The grammar-based model forms a general framework for studying geodesic metrics in semantic networks. | Grammar-Based Geodesics in Semantic Networks | 4,490 |
We present techniques for maintaining subgraph frequencies in a dynamic graph, using data structures that are parameterized in terms of h, the h-index of the graph. Our methods extend previous results of Eppstein and Spiro for maintaining statistics for undirected subgraphs of size three to directed subgraphs and to subgraphs of size four. For the directed case, we provide a data structure to maintain counts for all 3-vertex induced subgraphs in O(h) amortized time per update. For the undirected case, we maintain the counts of size-four subgraphs in O(h^2) amortized time per update. These extensions enable a number of new applications in Bioinformatics and Social Networking research. | Extended h-Index Parameterized Data Structures for Computing Dynamic Subgraph Statistics | 4,491
The pathwidth of a graph is a measure of how path-like the graph is. Given a graph G and an integer k, the problem of deciding whether there exist at most k vertices in G whose deletion results in a graph of pathwidth at most one is NP-complete. We initiate the study of the parameterized complexity of this problem, parameterized by k. We show that the problem has a quartic vertex-kernel: given an input instance (G = (V, E), k) with |V| = n, we can construct, in polynomial time, an instance (G', k') such that (i) (G, k) is a YES instance if and only if (G', k') is a YES instance, (ii) G' has O(k^{4}) vertices, and (iii) k' \leq k. We also give a fixed parameter tractable (FPT) algorithm for the problem that runs in O(7^{k} k \cdot n^{2}) time. | A Quartic Kernel for Pathwidth-One Vertex Deletion | 4,492
Pedigree graphs, or family trees, are typically constructed by an expensive process of examining genealogical records to determine which pairs of individuals are parent and child. New methods to automate this process take as input genetic data from a set of extant individuals and reconstruct ancestral individuals. There is a great need to evaluate the quality of these methods by comparing the estimated pedigree to the true pedigree. In this paper, we consider two main pedigree comparison problems. The first is the pedigree isomorphism problem, for which we present a linear-time algorithm for leaf-labeled pedigrees. The second is the pedigree edit distance problem, for which we present 1) several algorithms that are fast and exact in various special cases, and 2) a general, randomized heuristic algorithm. In the negative direction, we first prove that the pedigree isomorphism problem is as hard as the general graph isomorphism problem, and that the sub-pedigree isomorphism problem is NP-hard. We then show that the pedigree edit distance problem is APX-hard in general and NP-hard on leaf-labeled pedigrees. We use simulated pedigrees to compare our edit-distance algorithms to each other as well as to a branch-and-bound algorithm that always finds an optimal solution. | Comparing Pedigree Graphs | 4,493 |
An independent dominating set D of a graph G = (V,E) is a subset of vertices such that every vertex in V \ D has at least one neighbor in D and D is an independent set, i.e. no two vertices of D are adjacent in G. Finding a minimum independent dominating set in a graph is an NP-hard problem. Whereas it is hard to cope with this problem using parameterized and approximation algorithms, there is a simple exact O(1.4423^n)-time algorithm solving the problem by enumerating all maximal independent sets. In this paper we improve the latter result, providing the first non-trivial algorithm computing a minimum independent dominating set of a graph in time O(1.3569^n). Furthermore, we give a lower bound of \Omega(1.3247^n) on the worst-case running time of this algorithm, showing that the running time analysis is almost tight. | A Branch-and-Reduce Algorithm for Finding a Minimum Independent Dominating Set | 4,494
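The simple O(1.4423^n) baseline mentioned above is short to write down: a minimum independent dominating set is exactly a minimum maximal independent set, and maximal independent sets of G are maximal cliques of the complement of G. The sketch below leans on networkx for the enumeration.

```python
# Baseline: minimum independent dominating set via maximal-clique enumeration.
import networkx as nx

def min_independent_dominating_set(G):
    H = nx.complement(G)
    # Maximal cliques of the complement = maximal independent sets of G.
    return min(nx.find_cliques(H), key=len)

G = nx.cycle_graph(6)
print(sorted(min_independent_dominating_set(G)))  # a size-2 set, e.g. [0, 3]
```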
We formulate the problem as follows: split a file into n pieces so that it can be restored even without any m of the parts (1 <= m <= n). Such problems are called secret-sharing problems. A number of methods exist for solving them, but they all require a fairly large amount of computation when applied to the problem posed above. The proposed method requires no computation: it requires only splitting the file into equal (or nearly equal) parts and gluing them, in a certain order, into one or more files. | One method of storing information | 4,495
A major factor affecting the readability of a graph drawing is its resolution. In the graph drawing literature, the resolution of a drawing is either measured based on the angles formed by consecutive edges incident to a common node (angular resolution) or by the angles formed at edge crossings (crossing resolution). In this paper, we evaluate both by introducing the notion of "total resolution", that is, the minimum of the angular and crossing resolution. To the best of our knowledge, this is the first time where the problem of maximizing the total resolution of a drawing is studied. The main contribution of the paper consists of drawings of asymptotically optimal total resolution for complete graphs (circular drawings) and for complete bipartite graphs (2-layered drawings). In addition, we present and experimentally evaluate a force-directed based algorithm that constructs drawings of large total resolution. | Maximizing the Total Resolution of Graphs | 4,496 |
Wireless communication networks based on Frequency Division Multiplexing (FDM for short) play an important role in the field of communications; in such networks, each request can be satisfied by assigning a frequency. To avoid interference, each assigned frequency must be different from the neighboring assigned frequencies. Since frequency is a scarce resource, the main problem in wireless networks is how to fully utilize the given bandwidth of frequencies. In this paper, we consider the online call control problem. Given a fixed bandwidth of frequencies and a sequence of communication requests arriving over time, each request must either be satisfied immediately upon arrival by assigning an available frequency, or be rejected. The objective of the call control problem is to maximize the number of accepted requests. We study the asymptotic performance of this problem, i.e., when the number of requests in the sequence and the bandwidth of frequencies are very large. In this paper, we give a 7/3-competitive algorithm for the call control problem in cellular networks, improving the previous 2.5-competitive result. Moreover, we investigate triangle-free cellular networks, propose a 9/4-competitive algorithm, and prove a lower bound of 5/3 on the competitive ratio. | Deterministic Online Call Control in Cellular Networks and Triangle-Free Cellular Networks | 4,497
We introduce a problem that is a common generalization of the uncapacitated facility location and minimum latency (ML) problems, where facilities need to be opened to serve clients and also need to be sequentially activated before they can provide service. Formally, we are given a set \mathcal{F} of n facilities with facility-opening costs {f_i}, a set of m clients, and connection costs {c_{ij}} specifying the cost of assigning a client j to a facility i, a root node r denoting the depot, and a time metric d on \mathcal{F}\cup\{r\}. Our goal is to open a subset F of facilities, find a path P starting at r and spanning F to activate the open facilities, and connect each client j to a facility \phi(j)\in F, so as to minimize \sum_{i\in F}f_i +\sum_{clients j}(c_{\phi(j),j}+t_j), where t_j is the time taken to reach \phi(j) along path P. We call this the minimum latency uncapacitated facility location (MLUFL) problem. Our main result is an O(\log n\cdot\max\{\log n,\log m\})-approximation for MLUFL. We also show that any improvement in this approximation guarantee implies an improvement in the (current-best) approximation factor for group Steiner tree. We obtain constant approximations for two natural special cases of the problem: (a) related MLUFL (metric connection costs that are a scalar multiple of the time metric); (b) metric uniform MLUFL (metric connection costs, uniform time metric). Our LP-based methods are versatile and easily adapted to yield approximation guarantees for MLUFL in various more general settings, such as (i) when the latency-cost of a client is a function of the delay faced by the facility to which it is connected; and (ii) the k-route version, where k vehicles are routed in parallel to activate the open facilities. Our LP-based understanding of MLUFL also offers some LP-based insights into ML, which we believe is a promising direction for obtaining improvements for ML. | Facility Location with Client Latencies: Linear-Programming based Techniques for Minimum-Latency Problems | 4,498
We consider an extension of the {\em popular matching} problem in this paper. The input to the popular matching problem is a bipartite graph G = (A U B,E), where A is a set of people, B is a set of items, and each person a belonging to A ranks a subset of items in an order of preference, with ties allowed. The popular matching problem seeks to compute a matching M* between people and items such that there is no matching M where more people are happier with M than with M*. Such a matching M* is called a popular matching. However, there are simple instances where no popular matching exists. Here we consider the following natural extension to the above problem: associated with each item b belonging to B is a non-negative price cost(b), that is, for any item b, new copies of b can be added to the input graph by paying an amount of cost(b) per copy. When G does not admit a popular matching, the problem is to "augment" G at minimum cost such that the new graph admits a popular matching. We show that this problem is NP-hard; in fact, it is NP-hard to approximate it within a factor of \sqrt{n_1}/2, where n_1 is the number of people. This problem has a simple polynomial time algorithm when each person has a preference list of length at most 2. However, if we consider the problem of "constructing" a graph at minimum cost that admits a popular matching that matches all people, then even with preference lists of length 2, the problem becomes NP-hard. On the other hand, when the number of copies of each item is "fixed", we show that the problem of computing a minimum cost popular matching or deciding that no popular matching exists can be solved in O(m n_1) time, where m is the number of edges. | Popularity at Minimum Cost | 4,499