text | source | __index_level_0__
|---|---|---|
This paper focuses on two main issues. The first is the impact of combining multi-sensor images on supervised classification accuracy using Segment Fusion (SF). The second is a study of supervised machine learning classification of remote sensing images using four classifiers, namely the Parallelepiped (Pp), Mahalanobis Distance (MD), Maximum-Likelihood (ML) and Euclidean Distance (ED) classifiers, whose accuracies are evaluated on their respective classifications in order to choose the best technique for classifying remote sensing images. QuickBird multispectral (MS) and panchromatic (PAN) data have been used in this study to demonstrate the enhancement and accuracy assessment of the fused image over the original images, using ALwassaiProcess software. The experimental results indicate that supervised classification of the fused image performs better than classification of the MS image alone, and that the Euclidean Distance classifier is robust and provides better results than the other classifiers, despite the popular belief that the maximum-likelihood classifier is the most accurate. | Influences Combination of Multi-Sensor Images on Classification Accuracy | 3,900 |
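The abstract above compares distance-based supervised classifiers. As a rough illustration only (not code from the paper, and not the ALwassaiProcess software), the following Python sketch shows the idea behind the Euclidean Distance classifier: each pixel is assigned to the class whose mean spectral vector, estimated from training data, is nearest. The array names and toy band values are invented for the example.

```python
import numpy as np

def euclidean_classify(pixels, class_means):
    """Assign each pixel to the class whose mean spectral vector is nearest.

    pixels:      (n_pixels, n_bands) array of spectral values.
    class_means: (n_classes, n_bands) array of per-class mean vectors
                 estimated from training samples.
    Returns one class index per pixel.
    """
    diff = pixels[:, None, :] - class_means[None, :, :]  # (n_pixels, n_classes, n_bands)
    dist2 = np.sum(diff ** 2, axis=2)                    # squared Euclidean distances
    return np.argmin(dist2, axis=1)

# Toy example: two classes in a 3-band image.
means = np.array([[50.0, 60.0, 70.0], [200.0, 180.0, 150.0]])
pix = np.array([[55.0, 58.0, 72.0], [190.0, 185.0, 160.0]])
print(euclidean_classify(pix, means))  # -> [0 1]
```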
There has been intensive research on handwritten character recognition (HCR) for almost four decades, covering popular scripts such as Roman, Arabic, Chinese and Indian. In this paper we present a review of HCR work on these four popular scripts. We summarize most of the papers published from 2005 onward and analyze the various methods used to build a robust HCR system. We also suggest some future directions of research on HCR. | A review on handwritten character and numeral recognition for Roman,
Arabic, Chinese and Indian scripts | 3,901 |
We propose a state-of-the-art method for intelligent object recognition and video surveillance based on human visual attention. Bottom-up and top-down attention are applied, respectively, to acquiring the object of interest (saliency map) and to object recognition. A revised 4-channel PFT method is proposed for bottom-up attention, improving both speed and accuracy. Inhibition of return (IOR) is applied to determine the order in which salient objects pop out. The Euclidean distance of the color distribution, the object center coordinates and the speed are considered in judging whether a target is matched and suspicious. Extensive tests on videos and images show that our method achieves high accuracy and fast speed in video analysis compared with traditional methods. The method can be applied in many fields such as video surveillance and security. | Suspicious Object Recognition Method in Video Stream Based on Visual
Attention | 3,902 |
Here we discuss the application of an edge detection filter, the Sobel filter of GIMP, to the recently discovered motion of some sand dunes on Mars. The filter allows a good comparison of a 2007 HiRISE image with a 1999 image recorded by the Mars Global Surveyor of the dunes in the Nili Patera caldera, thereby measuring the motion of the dunes over a longer period of time than previously investigated. | Edge-detection applied to moving sand dunes on Mars | 3,903 |
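For readers unfamiliar with the filter named above, here is a minimal, self-contained NumPy sketch of Sobel gradient-magnitude edge detection. It is a generic textbook implementation, not the GIMP filter actually used in the study.

```python
import numpy as np

def sobel_magnitude(img):
    """Return the Sobel gradient magnitude of a 2-D grayscale image."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T                          # vertical-gradient kernel
    h, w = img.shape
    padded = np.pad(img.astype(float), 1, mode="edge")
    out = np.zeros((h, w))
    for i in range(h):
        for j in range(w):
            win = padded[i:i + 3, j:j + 3]
            gx = np.sum(win * kx)
            gy = np.sum(win * ky)
            out[i, j] = np.hypot(gx, gy)
    return out

img = np.zeros((8, 8))
img[:, 4:] = 255.0                     # vertical step edge
print(sobel_magnitude(img)[4, 3:6])    # large values at the step edge
```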
An approach for effective implementation of greedy selection methodologies, to approximate an image partitioned into blocks, is proposed. The method is specially designed for approximating partitions on a transformed image. It evolves by selecting, at each iteration step, i) the elements for approximating each of the blocks partitioning the image and ii) the hierarchized sequence in which the blocks are approximated to reach the required global condition on sparsity. | Hierarchized block wise image approximation by greedy pursuit strategies | 3,904 |
Multiphase active contour based models are useful in identifying multiple regions with different characteristics such as the mean values of regions. This is relevant in brain magnetic resonance images (MRIs), allowing the differentiation of white matter against gray matter. We consider a well defined globally convex formulation of Vese and Chan multiphase active contour model for segmenting brain MRI images. A well-established theory and an efficient dual minimization scheme are thoroughly described which guarantees optimal solutions and provides stable segmentations. Moreover, under the dual minimization implementation our model perfectly describes disjoint regions by avoiding local minima solutions. Experimental results indicate that the proposed approach provides better accuracy than other related multiphase active contour algorithms even under severe noise, intensity inhomogeneities, and partial volume effects. | Brain MRI Segmentation with Fast and Globally Convex Multiphase Active
Contours | 3,905 |
This paper deals with the recognition and matching of text in both cartographic maps and ancient documents. The purpose of this work is to find similar text regions based on statistical and global features. A normalization phase is performed first, in order to properly categorize the same quantity of information. A wordspotting phase is then performed by combining local and global features. We carry out different experiments combining the different feature extraction techniques in order to obtain better results in the recognition phase. We applied fontspotting to both ancient and cartographic documents. We also applied wordspotting, for which we adopted a new technique that compares images of characters rather than entire word images. We report the precision and recall values obtained with three methods for this new character-level wordspotting. | Text recognition in both ancient and cartographic documents | 3,906 |
The analysis of historical documents is still a topical issue, given the importance of the information that can be extracted and the importance institutions place on preserving their heritage. The main idea for characterizing the content of ancient document images is, after attempting to clean the image, to segment text blocks from the image and look for similar blocks in either the same image or the entire image database. Most approaches to offline handwriting recognition proceed by segmenting words into smaller pieces (usually characters) which are recognized separately; recognition of a word then requires the recognition of all the characters (OCR) that compose it. Our work focuses mainly on the characterization of classes in images of old documents. We use the SOM Toolbox to find classes in documents. We also apply fractal dimensions and points of interest to categorize and match ancient documents. | Categorizing ancient documents | 3,907 |
Characterizing noisy or ancient documents remains a challenging problem. Many techniques have been proposed for feature extraction and image indexing of such documents. Global approaches are in general less robust and less exact than local ones. That is why we propose in this paper a hybrid system based on a global approach (fractal dimension) and a local one based on the SIFT descriptor. The Scale Invariant Feature Transform suits our application well since it is rotation invariant and relatively robust to changing illumination. In the first step, the fractal dimension is computed for each image in order to eliminate images whose features are too distant from those of the query image. Next, SIFT is applied to determine which images best match the query. Moreover, the average matching time of the hybrid approach is better than that of "fractal dimension" or "SIFT descriptor" used alone. | A proposition of a robust system for historical document images
indexation | 3,908 |
In this paper we propose the Graduated NonConvexity and Graduated Concavity Procedure (GNCGCP) as a general optimization framework to approximately solve the combinatorial optimization problems on the set of partial permutation matrices. GNCGCP comprises two sub-procedures, graduated nonconvexity (GNC) which realizes a convex relaxation and graduated concavity (GC) which realizes a concave relaxation. It is proved that GNCGCP realizes exactly a type of convex-concave relaxation procedure (CCRP), but with a much simpler formulation without needing convex or concave relaxation in an explicit way. Actually, GNCGCP involves only the gradient of the objective function and is therefore very easy to use in practical applications. Two typical NP-hard problems, (sub)graph matching and quadratic assignment problem (QAP), are employed to demonstrate its simplicity and state-of-the-art performance. | GNCGCP - Graduated NonConvexity and Graduated Concavity Procedure | 3,909 |
In this paper we present a practical approach for generating an occlusion-free textured 3D map of urban facades by the synergistic use of terrestrial images, 3D point clouds and area-based information. Particularly in dense urban environments, the high presence of urban objects in front of the facades causes significant difficulties for several stages in computational building modeling. Major challenges lie on the one hand in extracting complete 3D facade quadrilateral delimitations and on the other hand in generating occlusion-free facade textures. For these reasons, we describe a straightforward approach for completing and recovering facade geometry and textures by exploiting the data complementarity of terrestrial multi-source imagery and area-based information. | A Synergistic Approach for Recovering Occlusion-Free Textured 3D Maps of
Urban Facades from Heterogeneous Cartographic Data | 3,910 |
With the rapid development of digital imaging and communication technologies, image set based face recognition (ISFR) is becoming increasingly important. One key issue of ISFR is how to effectively and efficiently represent the query face image set by using the gallery face image sets. The set-to-set distance based methods ignore the relationship between gallery sets, while representing the query set images individually over the gallery sets ignores the correlation between query set images. In this paper, we propose a novel image set based collaborative representation and classification method for ISFR. By modeling the query set as a convex or regularized hull, we represent this hull collaboratively over all the gallery sets. With the resolved representation coefficients, the distance between the query set and each gallery set can then be calculated for classification. The proposed model naturally and effectively extends the image based collaborative representation to an image set based one, and our extensive experiments on benchmark ISFR databases show the superiority of the proposed method to state-of-the-art ISFR methods under different set sizes in terms of both recognition rate and efficiency. | Image Set based Collaborative Representation for Face Recognition | 3,911 |
In this work, a new constrained hybrid variational deblurring model is developed by combining the non-convex first- and second-order total variation regularizers. Moreover, a box constraint is imposed on the proposed model to guarantee high deblurring performance. The developed constrained hybrid variational model could achieve a good balance between preserving image details and alleviating ringing artifacts. In what follows, we present the corresponding numerical solution by employing an iteratively reweighted algorithm based on alternating direction method of multipliers. The experimental results demonstrate the superior performance of the proposed method in terms of quantitative and qualitative image quality assessments. | A Robust Alternating Direction Method for Constrained Hybrid Variational
Deblurring Model | 3,912 |
Blind image quality assessment (BIQA) aims to predict perceptual image quality scores without access to reference images. State-of-the-art BIQA methods typically require subjects to score a large number of images to train a robust model. However, subjective quality scores are imprecise, biased, and inconsistent, and it is challenging to obtain a large scale database, or to extend existing databases, because of the inconvenience of collecting images, training the subjects, conducting subjective experiments, and realigning human quality evaluations. To combat these limitations, this paper explores and exploits preference image pairs (PIPs) such as "the quality of image $I_a$ is better than that of image $I_b$" for training a robust BIQA model. The preference label, representing the relative quality of two images, is generally precise and consistent, and is not sensitive to image content, distortion type, or subject identity; such PIPs can be generated at very low cost. The proposed BIQA method is one of learning to rank. We first formulate the problem of learning the mapping from the image features to the preference label as one of classification. In particular, we investigate the utilization of a multiple kernel learning algorithm based on group lasso (MKLGL) to provide a solution. A simple but effective strategy to estimate perceptual image quality scores is then presented. Experiments show that the proposed BIQA method is highly effective and achieves comparable performance to state-of-the-art BIQA algorithms. Moreover, the proposed method can be easily extended to new distortion categories. | Learning to Rank for Blind Image Quality Assessment | 3,913 |
Many efforts have been devoted to develop alternative methods to traditional vector quantization in image domain such as sparse coding and soft-assignment. These approaches can be split into a dictionary learning phase and a feature encoding phase which are often closely connected. In this paper, we investigate the effects of these phases by separating them for video-based action classification. We compare several dictionary learning methods and feature encoding schemes through extensive experiments on KTH and HMDB51 datasets. Experimental results indicate that sparse coding performs consistently better than the other encoding methods in large complex dataset (i.e., HMDB51), and it is robust to different dictionaries. For small simple dataset (i.e., KTH) with less variation, however, all the encoding strategies perform competitively. In addition, we note that the strength of sophisticated encoding approaches comes not from their corresponding dictionaries but the encoding mechanisms, and we can just use randomly selected exemplars as dictionaries for video-based action classification. | A Study on Unsupervised Dictionary Learning and Feature Encoding for
Action Classification | 3,914 |
This paper describes an efficient approach for human face recognition based on blood perfusion data from infra-red face images. Blood perfusion data are characterized by the regional blood flow in human tissue and therefore do not depend entirely on the surrounding temperature. These data have great potential for deriving discriminating facial thermograms, giving better classification and recognition of face images than optical image data. Blood perfusion data are related to the distribution of blood vessels under the facial skin. The distribution of blood vessels is unique for each person, so a set of minutiae points extracted from the blood perfusion data of a human face should be unique to that face. There may be several such minutiae point sets for a single face, but all of them correspond to that particular face only. The entire face image is partitioned into equal blocks and the number of minutiae points in each block is counted to construct the final feature vector; the size of the feature vector therefore equals the total number of blocks. For classification, a five-layer feed-forward backpropagation neural network has been used. A number of experiments were conducted to evaluate the performance of the proposed face recognition system with varying block sizes, on a database created in our own laboratory. A maximum recognition rate of 91.47% has been achieved with block size 8x8. | Minutiae Based Thermal Face Recognition using Blood Perfusion Data | 3,915 |
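As an illustrative sketch only (not code from the paper), the block-wise feature construction described above amounts to partitioning the face image into equal blocks and counting the minutiae points that fall in each block. The image size, block size and coordinates below are made-up example values.

```python
import numpy as np

def block_minutiae_counts(shape, minutiae, block=8):
    """Count minutiae points per non-overlapping square block.

    shape:    (height, width) of the face image.
    minutiae: iterable of (row, col) minutiae coordinates.
    block:    side length of the square blocks, e.g. 8 for 8x8 blocks.
    Returns a 1-D feature vector with one count per block.
    """
    h, w = shape
    rows, cols = h // block, w // block
    counts = np.zeros((rows, cols), dtype=int)
    for r, c in minutiae:
        br = min(r // block, rows - 1)
        bc = min(c // block, cols - 1)
        counts[br, bc] += 1
    return counts.ravel()

# A 64x64 face image with three minutiae points -> 64-dimensional vector.
vec = block_minutiae_counts((64, 64), [(3, 5), (10, 40), (63, 63)], block=8)
print(vec.shape, vec.sum())  # (64,) 3
```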
In this paper an efficient approach for human face recognition based on the use of minutiae points in thermal face images is proposed. The thermogram of the human face is captured by a thermal infra-red camera. Image processing methods are used to pre-process the captured thermogram, from which different physiological features based on blood perfusion data are extracted. Blood perfusion data are related to the distribution of blood vessels under the facial skin. In the present work, three different methods have been used to obtain the blood perfusion image: bit-plane slicing and medial axis transform, morphological erosion and medial axis transform, and Sobel edge operators. The distribution of blood vessels is unique for each person, so a set of minutiae points extracted from the blood perfusion data of a human face should be unique to that face. Two different methods are discussed for extracting minutiae points from blood perfusion data. For feature extraction, the entire face image is partitioned into blocks of equal size and the number of minutiae points in each block is counted to construct the final feature vector; the size of the feature vector therefore equals the total number of blocks. A five-layer feed-forward back propagation neural network is used as the classification tool. A number of experiments were conducted to evaluate the performance of the proposed face recognition methodologies with varying block size on a database created in our own laboratory. It has been found that the first method supersedes the other two, producing an accuracy of 97.62% with block size 16x16 for bit-plane 4. | Automated Thermal Face recognition based on Minutiae Extraction | 3,916 |
Thermal infra-red (IR) images capture changes in the temperature distribution over facial muscles and blood vessels. These temperature changes can be regarded as texture features of the images. A comparative study of face recognition methods working in the thermal spectrum is carried out in this paper. In this study two local-matching methods, based on the Haar wavelet transform and the Local Binary Pattern (LBP), are analyzed. The wavelet transform is a good tool for analyzing multi-scale, multi-directional changes in texture; local binary patterns (LBP) are a type of feature used for classification in computer vision. Firstly, the human thermal IR face image is preprocessed and only the face region is cropped from the entire image. Secondly, two different approaches are used to extract features from the cropped face region. In the first approach, the training and test images are processed with the Haar wavelet transform, and the LL band and the average of the LH/HL/HH band sub-images are created for each face image. A total confidence matrix is then formed for each face image by taking a weighted sum of the corresponding pixel values of the LL band and the average band. For LBP feature extraction, each face image in the training and test datasets is divided into 161 sub-images, each of size 8x8 pixels. For each such sub-image, LBP features are extracted and concatenated row-wise. PCA is performed separately on each feature set for dimensionality reduction. Finally, two different classifiers are used to classify the face images: a multi-layer feed-forward neural network and a minimum distance classifier. The experiments have been performed on the database created at our own laboratory and on the Terravic Facial IR Database. | A Comparative Study of Human thermal face recognition based on Haar
wavelet transform (HWT) and Local Binary Pattern (LBP) | 3,917 |
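A minimal sketch of the basic 8-neighbour LBP operator mentioned above, assuming a plain NumPy grayscale image. It shows only the per-pixel code computation, not the block partitioning, histogramming, PCA or classification stages of the paper.

```python
import numpy as np

def lbp_image(img):
    """Return the 8-bit Local Binary Pattern code of each interior pixel."""
    img = img.astype(int)
    center = img[1:-1, 1:-1]
    # The 8 neighbours, visited clockwise starting at the top-left corner.
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    codes = np.zeros_like(center)
    h, w = img.shape
    for bit, (dr, dc) in enumerate(offsets):
        shifted = img[1 + dr:h - 1 + dr, 1 + dc:w - 1 + dc]
        codes |= (shifted >= center).astype(int) << bit
    return codes

img = np.arange(25).reshape(5, 5)
print(lbp_image(img).shape)  # (3, 3): one code per interior pixel
```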
The goal of object detection is to find objects in an image. An object detector accepts an image and produces a list of locations as $(x,y)$ pairs. Here we introduce a new concept: location-based boosting. Location-based boosting differs from previous boosting algorithms because it optimizes a new spatial loss function to combine object detectors, each of which may have marginal performance, into a single, more accurate object detector. A structured representation of object locations as a list of $(x,y)$ pairs is a more natural domain for object detection than the spatially unstructured representation produced by classifiers. Furthermore, this formulation allows us to take advantage of the intuition that large areas of the background are uninteresting and it is not worth expending computational effort on them. This results in a more scalable algorithm because it does not need to take measures to prevent the background data from swamping the foreground data such as subsampling or applying an ad-hoc weighting to the pixels. We first present the theory of location-based boosting, and then motivate it with empirical results on a challenging data set. | Boosting in Location Space | 3,918 |
In this paper, a thermal infra-red face recognition system for human identification and verification, using blood perfusion data and a back propagation feed forward neural network, is proposed. The system consists of three steps. In the first step, the face region is cropped from the colour 24-bit input images. Secondly, face features are extracted from the cropped region; these are taken as input to the back propagation feed forward neural network in the third step, where classification and recognition are carried out. The proposed approaches are tested on a number of human thermal infra-red face images created at our own laboratory. Experimental results reveal a high degree of performance. | Minutiae Based Thermal Human Face Recognition using Label Connected
Component Algorithm | 3,919 |
Thermal infrared (IR) images represent the heat patterns emitted from hot objects, not the energy reflected from an object. Living and non-living objects emit different amounts of IR energy according to their body temperature and characteristics. Humans are homeothermic and hence capable of maintaining a constant temperature under different surrounding temperatures. Face recognition from thermal (IR) images should focus on temperature changes over facial blood vessels. These temperature changes can be regarded as texture features of the images, and the wavelet transform is a very good tool for analyzing multi-scale and multi-directional texture. The wavelet transform is also used for image dimensionality reduction, removing redundancies while preserving the original features of the image. Since facial images are normally large, the wavelet transform is applied before image similarity is measured. This paper therefore describes an efficient approach to human face recognition from thermal IR images based on the wavelet transform. The system consists of three steps. In the first step, the human thermal IR face image is preprocessed and only the face region is cropped from the entire image. Secondly, the Haar wavelet is used to extract the low-frequency band from the cropped face region. Lastly, classification between the training images and the test images is performed based on the low-frequency components. The proposed approach is tested on a number of human thermal infrared face images created at our own laboratory and on the Terravic Facial IR Database. Experimental results indicate that thermal infra-red face images can be recognized effectively by the proposed system; a maximum recognition rate of 95% has been achieved. | Thermal Human face recognition based on Haar wavelet transform and
series matching technique | 3,920 |
This article is a sketch of ideas that were once intended to appear in the author's famous series, "The Art of Computer Programming". He generalizes the notion of a context-free language from a set to a multiset of words over an alphabet. The idea is to keep track of the number of ways to parse a string. For example, "fruit flies like a banana" can famously be parsed in two ways; analogous examples in the setting of programming languages may yet be important in the future. The treatment is informal but essentially rigorous. | Context-free multilanguages | 3,921 |
A perturbation technique can be used to simplify and sharpen A. C. Yao's theorems about the behavior of shellsort with increments $(h,g,1)$. In particular, when $h=\Theta(n^{7/15})$ and $g=\Theta(h^{1/5})$, the average running time is $O(n^{23/15})$. The proof involves interesting properties of the inversions in random permutations that have been $h$-sorted and $g$-sorted. | Shellsort with three increments | 3,922 |
Mallows and Riordan showed in 1968 that labeled trees with a small number of inversions are related to labeled graphs that are connected and sparse. Wright enumerated sparse connected graphs in 1977, and Kreweras related the inversions of trees to the so-called "parking problem" in 1980. A combination of these three results leads to a surprisingly simple analysis of the behavior of hashing by linear probing, including higher moments of the cost of successful search. | Linear probing and graphs | 3,923 |
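As background for the analysis above, here is a minimal sketch of hashing by linear probing itself (insertion and search); the table size and keys are arbitrary examples, and the sketch says nothing about the cost analysis the paper derives.

```python
def insert(table, key):
    """Insert key into an open-addressing table using linear probing.
    Empty slots hold None; the table is assumed not to be full."""
    m = len(table)
    i = hash(key) % m
    while table[i] is not None:
        i = (i + 1) % m            # probe the next slot
    table[i] = key

def search(table, key):
    """Return the slot index holding key, or None if key is absent."""
    m = len(table)
    i = hash(key) % m
    while table[i] is not None:
        if table[i] == key:
            return i
        i = (i + 1) % m
    return None

table = [None] * 11
for k in (14, 25, 3, 36):          # all four keys hash to slot 3 mod 11
    insert(table, k)
print(search(table, 36))           # -> 6, after probing past 14, 25 and 3
```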
We significantly improve known time bounds for solving the minimum cut problem on undirected graphs. We use a ``semi-duality'' between minimum cuts and maximum spanning tree packings combined with our previously developed random sampling techniques. We give a randomized algorithm that finds a minimum cut in an m-edge, n-vertex graph with high probability in O(m log^3 n) time. We also give a simpler randomized algorithm that finds all minimum cuts with high probability in O(n^2 log n) time. This variant has an optimal RNC parallelization. Both variants improve on the previous best time bound of O(n^2 log^3 n). Other applications of the tree-packing approach are new, nearly tight bounds on the number of near minimum cuts a graph may have and a new data structure for representing them in a space-efficient manner. | Minimum Cuts in Near-Linear Time | 3,924 |
We consider the problem of coloring k-colorable graphs with the fewest possible colors. We present a randomized polynomial time algorithm that colors a 3-colorable graph on $n$ vertices with min O(Delta^{1/3} log^{1/2} Delta log n), O(n^{1/4} log^{1/2} n) colors where Delta is the maximum degree of any vertex. Besides giving the best known approximation ratio in terms of n, this marks the first non-trivial approximation result as a function of the maximum degree Delta. This result can be generalized to k-colorable graphs to obtain a coloring using min O(Delta^{1-2/k} log^{1/2} Delta log n), O(n^{1-3/(k+1)} log^{1/2} n) colors. Our results are inspired by the recent work of Goemans and Williamson who used an algorithm for semidefinite optimization problems, which generalize linear programs, to obtain improved approximations for the MAX CUT and MAX 2-SAT problems. An intriguing outcome of our work is a duality relationship established between the value of the optimum solution to our semidefinite program and the Lovasz theta-function. We show lower bounds on the gap between the optimum solution of our semidefinite program and the actual chromatic number; by duality this also demonstrates interesting new facts about the theta-function. | Approximate Graph Coloring by Semidefinite Programming | 3,925 |
We examine the possibility of designing an efficient solving algorithm for problems in the class NP. We introduce a classification of NP problems by the property that a partial solution of size $k$ can be extended into a partial solution of size $k+1$ in polynomial time. We define a unique class of problems for which it is worth searching for an efficient solving algorithm; problems outside this class are inherently exponential. We show that the Hamiltonian cycle problem is inherently exponential. | A class of problems of NP to be worth to search an efficient solving
algorithm | 3,926 |
Tomography is the area of reconstructing objects from projections. Here we wish to reconstruct a set of cells in a two dimensional grid, given the number of cells in every row and column. The set is required to be an hv-convex polyomino, that is all its cells must be connected and the cells in every row and column must be consecutive. A simple, polynomial algorithm for reconstructing hv-convex polyominoes is provided, which is several orders of magnitudes faster than the best previously known algorithm from Barcucci et al. In addition, the problem of reconstructing a special class of centered hv-convex polyominoes is addressed. (An object is centered if it contains a row whose length equals the total width of the object). It is shown that in this case the reconstruction problem can be solved in linear time. | Reconstructing hv-Convex Polyominoes from Orthogonal Projections | 3,927 |
We solve the subgraph isomorphism problem in planar graphs in linear time, for any pattern of constant size. Our results are based on a technique of partitioning the planar graph into pieces of small tree-width, and applying dynamic programming within each piece. The same methods can be used to solve other planar graph problems including connectivity, diameter, girth, induced subgraph isomorphism, and shortest paths. | Subgraph Isomorphism in Planar Graphs and Related Problems | 3,928 |
We develop data structures for dynamic closest pair problems with arbitrary distance functions, that do not necessarily come from any geometric structure on the objects. Based on a technique previously used by the author for Euclidean closest pairs, we show how to insert and delete objects from an n-object set, maintaining the closest pair, in O(n log^2 n) time per update and O(n) space. With quadratic space, we can instead use a quadtree-like structure to achieve an optimal time bound, O(n) per update. We apply these data structures to hierarchical clustering, greedy matching, and TSP heuristics, and discuss other potential applications in machine learning, Groebner bases, and local improvement algorithms for partition and placement problems. Experiments show our new methods to be faster in practice than previously used heuristics. | Fast Hierarchical Clustering and Other Applications of Dynamic Closest
Pairs | 3,929 |
We discuss some aspects of approximating functions on high-dimensional data sets with additive functions or ANOVA decompositions, that is, sums of functions depending on fewer variables each. It is seen that under appropriate smoothness conditions, the errors of the ANOVA decompositions are of order $O(n^{m/2})$ for approximations using sums of functions of up to $m$ variables under some mild restrictions on the (possibly dependent) predictor variables. Several simulated examples illustrate this behaviour. | Additive models in high dimensions | 3,930 |
We examine the Maximum Independent Set Problem in an undirected graph. The main result is that this problem can be viewed as solving the same problem in a subclass of the weighted normal twin-orthogonal graphs. A problem dual to the one above is also formulated. It is shown that, for trivial twin-orthogonal graphs, every maximal independent set is also a maximum one. | About the finding of independent vertices of a graph | 3,931 |
We consider worst case time bounds for NP-complete problems including 3-SAT, 3-coloring, 3-edge-coloring, and 3-list-coloring. Our algorithms are based on a constraint satisfaction (CSP) formulation of these problems. 3-SAT is equivalent to (2,3)-CSP while the other problems above are special cases of (3,2)-CSP; there is also a natural duality transformation from (a,b)-CSP to (b,a)-CSP. We give a fast algorithm for (3,2)-CSP and use it to improve the time bounds for solving the other problems listed above. Our techniques involve a mixture of Davis-Putnam-style backtracking with more sophisticated matching and network flow based ideas. | 3-Coloring in Time O(1.3289^n) | 3,932 |
We determine the asymptotic satisfiability probability of a random at-most-k-Horn formula via a probabilistic analysis of a simple version of positive unit resolution, called PUR. We show that for k=k(n)->oo the problem can be "reduced" to the case k(n)=n, which was solved in cs.DS/9912001. On the other hand, when k is a constant the behavior of PUR is modeled by a simple queuing chain, leading to a closed-form solution when k=2. Our analysis predicts an "easy-hard-easy" pattern in this latter case. Under a rescaled parameter, the graphs of satisfaction probability corresponding to finite values of k converge to the one for the uniform case, a "dimension-dependent behavior" similar to the one found experimentally by Kirkpatrick and Selman (Science '94) for k-SAT. The phenomenon is qualitatively explained by a threshold property for the number of iterations PUR makes on random satisfiable Horn formulas. | Dimension-Dependent behavior in the satisfiability of random k-Horn
formulae | 3,933 |
In this paper we present a new data structure for double ended priority queue, called min-max fine heap, which combines the techniques used in fine heap and traditional min-max heap. The standard operations on this proposed structure are also presented, and their analysis indicates that the new structure outperforms the traditional one. | Min-Max Fine Heaps | 3,934 |
We present two new algorithms for solving the All Pairs Shortest Paths (APSP) problem for weighted directed graphs. Both algorithms use fast matrix multiplication algorithms. The first algorithm solves the APSP problem for weighted directed graphs in which the edge weights are integers of small absolute value in $\tilde{O}(n^{2+\mu})$ time, where $\mu$ satisfies the equation $\omega(1,\mu,1)=1+2\mu$ and $\omega(1,\mu,1)$ is the exponent of the multiplication of an $n\times n^\mu$ matrix by an $n^\mu \times n$ matrix. Currently, the best available bounds on $\omega(1,\mu,1)$, obtained by Coppersmith, imply that $\mu<0.575$. The running time of our algorithm is therefore $O(n^{2.575})$. Our algorithm improves on the $\tilde{O}(n^{(3+\omega)/2})$ time algorithm, where $\omega=\omega(1,1,1)<2.376$ is the usual exponent of matrix multiplication, obtained by Alon, Galil and Margalit, whose running time is only known to be $O(n^{2.688})$. The second algorithm solves the APSP problem almost exactly for directed graphs with arbitrary non-negative real weights. The algorithm runs in $\tilde{O}((n^\omega/\epsilon)\log (W/\epsilon))$ time, where $\epsilon>0$ is an error parameter and W is the largest edge weight in the graph, after the edge weights are scaled so that the smallest non-zero edge weight in the graph is 1. It returns estimates of all the distances in the graph with a stretch of at most $1+\epsilon$. Corresponding paths can also be found efficiently. | All Pairs Shortest Paths using Bridging Sets and Rectangular Matrix
Multiplication | 3,935 |
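The fast-matrix-multiplication APSP algorithms above revolve around the distance, or (min,+), product of matrices. For reference only, here is the naive cubic-time version of that product; the point of the paper is precisely to replace this direct computation with fast rectangular matrix multiplication.

```python
import math

def distance_product(A, B):
    """Naive (min,+) product: C[i][j] = min_k (A[i][k] + B[k][j]).
    Repeatedly squaring a graph's weight matrix under this product
    yields all-pairs shortest-path distances."""
    n = len(A)
    C = [[math.inf] * n for _ in range(n)]
    for i in range(n):
        for k in range(n):
            aik = A[i][k]
            if aik == math.inf:
                continue
            for j in range(n):
                if aik + B[k][j] < C[i][j]:
                    C[i][j] = aik + B[k][j]
    return C

INF = math.inf
W = [[0, 3, INF],
     [INF, 0, 1],
     [2, INF, 0]]
print(distance_product(W, W))  # e.g. entry [0][2] becomes 3 + 1 = 4
```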
We consider worst case time bounds for NP-complete problems including 3-SAT, 3-coloring, 3-edge-coloring, and 3-list-coloring. Our algorithms are based on a constraint satisfaction (CSP) formulation of these problems; 3-SAT is equivalent to (2,3)-CSP while the other problems above are special cases of (3,2)-CSP. We give a fast algorithm for (3,2)-CSP and use it to improve the time bounds for solving the other problems listed above. Our techniques involve a mixture of Davis-Putnam-style backtracking with more sophisticated matching and network flow based ideas. | Improved Algorithms for 3-Coloring, 3-Edge-Coloring, and Constraint
Satisfaction | 3,936 |
The author presents two tricks to accelerate depth-first search algorithms for a class of combinatorial puzzle problems, such as tiling a tray by a fixed set of polyominoes. The first trick is to implement each assumption of the search with reversible local operations on doubly linked lists. By this trick, every step of the search affects the data incrementally. The second trick is to add a ghost square that represents the identity of each polyomino. This puts the rule that each polyomino be used once on the same footing as the rule that each square be covered once. The coding simplifies to a more abstract form which is equivalent to 0-1 integer programming. More significantly for the total computation time, the search can naturally switch between placing a fixed polyomino or covering a fixed square at different stages, according to a combined heuristic. Finally the author reports excellent performance for his algorithm for some familiar puzzles. These include tiling a hexagon by 19 hexiamonds and the N queens problem for N up to 18. | Dancing links | 3,937 |
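The first trick described above rests on the fact that unlinking an element from a doubly linked list is reversible as long as the removed node keeps its own pointers. A minimal Python sketch of that remove/restore pair follows; it is only the core idea, not Knuth's full exact-cover solver.

```python
class Node:
    """Node of a circular doubly linked list."""
    def __init__(self, name):
        self.name = name
        self.left = self.right = self

def link_after(prev, node):
    """Insert node immediately to the right of prev."""
    node.left, node.right = prev, prev.right
    prev.right.left = node
    prev.right = node

def remove(node):
    """Unlink node from the list; node keeps its own pointers."""
    node.left.right = node.right
    node.right.left = node.left

def restore(node):
    """Undo remove(): the retained pointers put node back in place."""
    node.left.right = node
    node.right.left = node

a, b, c = Node("a"), Node("b"), Node("c")
link_after(a, b)
link_after(b, c)            # circular list a -> b -> c -> a
remove(b)
print(a.right.name)         # "c": b has been spliced out
restore(b)
print(a.right.name)         # "b": b is back, in O(1) time
```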
In this paper we present a random shuffling scheme to apply with adaptive sorting algorithms. Adaptive sorting algorithms utilize the presortedness present in a given sequence. We have probabilistically increased the amount of presortedness present in a sequence by using a random shuffling technique that requires little computation. Theoretical analysis suggests that the proposed scheme can improve the performance of adaptive sorting. Experimental results show that it significantly reduces the amount of disorder present in a given sequence and improves the execution time of adaptive sorting algorithm as well. | Random Shuffling to Reduce Disorder in Adaptive Sorting Scheme | 3,938 |
We introduce the smoothed analysis of algorithms, which is a hybrid of the worst-case and average-case analysis of algorithms. In smoothed analysis, we measure the maximum over inputs of the expected performance of an algorithm under small random perturbations of that input. We measure this performance in terms of both the input size and the magnitude of the perturbations. We show that the simplex algorithm has polynomial smoothed complexity. | Smoothed Analysis of Algorithms: Why the Simplex Algorithm Usually Takes
Polynomial Time | 3,939 |
In many applications, it is necessary to determine string similarity. The edit distance approach [WF74] is a classic method for determining Field Similarity. A well-known dynamic programming algorithm [GUS97] is used to calculate the edit distance with time complexity O(nm) (for the worst, average and even best case). Instead of continuing to improve the edit distance approach, [LL+99] adopted a brand new, token-based approach. Its new concept of token base retains the original semantic information, and its good time complexity, O(nm) (for the worst, average and best case), and good experimental performance make it a milestone paper in this area. Further study indicates that there is still room for improvement of its Field Similarity algorithm. Our paper introduces a package of new substring-based algorithms for determining Field Similarity. Combined together, our new algorithms not only achieve higher accuracy but also attain time complexity O(knm) (k<0.75) for the worst case, O(c*n) for some constant c<6 for the average case, and O(1) for the best case. Throughout the paper, we use comparative examples to show the higher accuracy of our algorithms compared to the one proposed in [LL+99]. Theoretical analysis, concrete examples and experimental results show that our algorithms can significantly improve the accuracy and time complexity of the calculation of Field Similarity. [GUS97] D. Gusfield. Algorithms on Strings, Trees and Sequences: Computer Science and Computational Biology. [LL+99] Mong Li Lee et al., Cleansing data for mining and warehousing, in Proceedings of the 10th International Conference on Database and Expert Systems Applications (DEXA99), pages 751-760, August 1999. [WF74] R. Wagner and M. Fischer, The String-to-String Correction Problem, JACM 21, pages 168-173, 1974. | Faster Algorithm of String Comparison | 3,940 |
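For reference, here is a standard dynamic-programming edit-distance routine of the kind cited above as [WF74]/[GUS97], using O(nm) time and O(n) space; the example strings are invented, and this is the baseline the abstract's substring-based algorithms aim to improve on.

```python
def edit_distance(s, t):
    """Classic Wagner-Fischer dynamic program for the edit distance."""
    m, n = len(s), len(t)
    prev = list(range(n + 1))          # distances for the empty prefix of s
    for i in range(1, m + 1):
        cur = [i] + [0] * n
        for j in range(1, n + 1):
            cost = 0 if s[i - 1] == t[j - 1] else 1
            cur[j] = min(prev[j] + 1,         # deletion
                         cur[j - 1] + 1,      # insertion
                         prev[j - 1] + cost)  # substitution or match
        prev = cur
    return prev[n]

print(edit_distance("warehouse", "warehuse"))  # -> 1 (one deleted letter)
```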
We study the problem of compressing massive tables within the partition-training paradigm introduced by Buchsbaum et al. [SODA'00], in which a table is partitioned by an off-line training procedure into disjoint intervals of columns, each of which is compressed separately by a standard, on-line compressor like gzip. We provide a new theory that unifies previous experimental observations on partitioning and heuristic observations on column permutation, all of which are used to improve compression rates. Based on the theory, we devise the first on-line training algorithms for table compression, which can be applied to individual files, not just continuously operating sources; and also a new, off-line training algorithm, based on a link to the asymmetric traveling salesman problem, which improves on prior work by rearranging columns prior to partitioning. We demonstrate these results experimentally. On various test files, the on-line algorithms provide 35-55% improvement over gzip with negligible slowdown; the off-line reordering provides up to 20% further improvement over partitioning alone. We also show that a variation of the table compression problem is MAX-SNP hard. | Improving Table Compression with Combinatorial Optimization | 3,941 |
We show that several versions of Floyd and Rivest's algorithm Select for finding the $k$th smallest of $n$ elements require at most $n+\min\{k,n-k\}+o(n)$ comparisons on average and with high probability. This rectifies the analysis of Floyd and Rivest, and extends it to the case of nondistinct elements. Our computational results confirm that Select may be the best algorithm in practice. | Randomized selection revisited | 3,942 |
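As context for the selection problem above, here is a plain randomized quickselect sketch; it is the simple relative of Floyd and Rivest's Select, which additionally picks pivots from a small random sample to reach the comparison bounds stated in the abstract.

```python
import random

def quickselect(a, k):
    """Return the k-th smallest element (1-indexed) of the list a."""
    assert 1 <= k <= len(a)
    while True:
        pivot = random.choice(a)
        less = [x for x in a if x < pivot]
        equal_count = a.count(pivot)
        if k <= len(less):
            a = less                         # answer lies among the smaller elements
        elif k <= len(less) + equal_count:
            return pivot                     # the pivot itself is the answer
        else:
            k -= len(less) + equal_count
            a = [x for x in a if x > pivot]  # continue with the larger elements

print(quickselect([9, 1, 8, 2, 7, 3], 3))    # -> 3
```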
Pattern-matching-based document-compression systems (e.g. for faxing) rely on finding a small set of patterns that can be used to represent all of the ink in the document. Finding an optimal set of patterns is NP-hard; previous compression schemes have resorted to heuristics. This paper describes an extension of the cross-entropy approach, used previously for measuring pattern similarity, to this problem. This approach reduces the problem to a k-medians problem, for which the paper gives a new algorithm with a provably good performance guarantee. In comparison to previous heuristics (First Fit, with and without generalized Lloyd's/k-means postprocessing steps), the new algorithm generates a better codebook, resulting in an overall improvement in compression performance of almost 17%. | A Codebook Generation Algorithm for Document Image Compression | 3,943 |
Mixed packing and covering problems are problems that can be formulated as linear programs using only non-negative coefficients. Examples include multicommodity network flow, the Held-Karp lower bound on TSP, fractional relaxations of set cover, bin-packing, knapsack, scheduling problems, minimum-weight triangulation, etc. This paper gives approximation algorithms for the general class of problems. The sequential algorithm is a simple greedy algorithm that can be implemented to find an epsilon-approximate solution in O(epsilon^-2 log m) linear-time iterations. The parallel algorithm does comparable work but finishes in polylogarithmic time. The results generalize previous work on pure packing and covering (the special case when the constraints are all "less-than" or all "greater-than") by Michael Luby and Noam Nisan (1993) and Naveen Garg and Jochen Konemann (1998). | Sequential and Parallel Algorithms for Mixed Packing and Covering | 3,944 |
We give a polynomial-time approximation scheme for the generalization of Huffman Coding in which codeword letters have non-uniform costs (as in Morse code, where the dash is twice as long as the dot). The algorithm computes a (1+epsilon)-approximate solution in time O(n + f(epsilon) log^3 n), where n is the input size. | Huffman Coding with Letter Costs: A Linear-Time Approximation Scheme | 3,945 |
Describes a near-linear-time algorithm for a variant of Huffman coding, in which the letters may have non-uniform lengths (as in Morse code), but with the restriction that each word to be encoded has equal probability. [See also ``Huffman Coding with Unequal Letter Costs'' (2002).] | Prefix Codes: Equiprobable Words, Unequal Letter Costs | 3,946 |
Falmagne recently introduced the concept of a medium, a combinatorial object encompassing hyperplane arrangements, topological orderings, acyclic orientations, and many other familiar structures. We find efficient solutions for several algorithmic problems on media: finding short reset sequences, shortest paths, testing whether a medium has a closed orientation, and listing the states of a medium given a black-box description. | Algorithms for Media | 3,947 |
We present algorithms that run in linear time on pointer machines for a collection of problems, each of which either directly or indirectly requires the evaluation of a function defined on paths in a tree. These problems previously had linear-time algorithms but only for random-access machines (RAMs); the best pointer-machine algorithms were super-linear by an inverse-Ackermann-function factor. Our algorithms are also simpler, in some cases substantially, than the previous linear-time RAM algorithms. Our improvements come primarily from three new ideas: a refined analysis of path compression that gives a linear bound if the compressions favor certain nodes, a pointer-based radix sort as a replacement for table-based methods, and a more careful partitioning of a tree into easily managed parts. Our algorithms compute nearest common ancestors off-line, verify and construct minimum spanning trees, do interval analysis on a flowgraph, find the dominators of a flowgraph, and build the component tree of a weighted tree. | Linear-Time Pointer-Machine Algorithms for Path-Evaluation Problems on
Trees and Graphs | 3,948 |
Dealing with the NP-complete Dominating Set problem on undirected graphs, we demonstrate the power of data reduction by preprocessing from a theoretical as well as a practical side. In particular, we prove that Dominating Set restricted to planar graphs has a so-called problem kernel of linear size, achieved by two simple and easy to implement reduction rules. Moreover, having implemented our reduction rules, first experiments indicate the impressive practical potential of these rules. Thus, this work seems to open up a new and promising way to cope with one of the most important problems in graph theory and combinatorial optimization. | Polynomial Time Data Reduction for Dominating Set | 3,949 |
We provide a data structure for maintaining an embedding of a graph on a surface (represented combinatorially by a permutation of edges around each vertex) and computing generators of the fundamental group of the surface, in amortized time O(log n + log g(log log g)^3) per update on a surface of genus g; we can also test orientability of the surface in the same time, and maintain the minimum and maximum spanning tree of the graph in time O(log n + log^4 g) per update. Our data structure allows edge insertion and deletion as well as the dual operations; these operations may implicitly change the genus of the embedding surface. We apply similar ideas to improve the constant factor in a separator theorem for low-genus graphs, and to find in linear time a tree-decomposition of low-genus low-diameter graphs. | Dynamic Generators of Topologically Embedded Graphs | 3,950 |
We study the problem of computing a preemptive schedule of equal-length jobs with given release times, deadlines and weights. Our goal is to maximize the weighted throughput, which is the total weight of completed jobs. In Graham's notation this problem is described as (1 | r_j;p_j=p;pmtn | sum w_j U_j). We provide an O(n^4)-time algorithm for this problem, improving the previous bound of O(n^{10}) by Baptiste. | Preemptive Scheduling of Equal-Length Jobs to Maximize Weighted
Throughput | 3,951 |
We introduce exponential search trees as a novel technique for converting static polynomial space search structures for ordered sets into fully-dynamic linear space data structures. This leads to an optimal bound of O(sqrt(log n/loglog n)) for searching and updating a dynamic set of n integer keys in linear space. Here searching an integer y means finding the maximum key in the set which is smaller than or equal to y. This problem is equivalent to the standard text book problem of maintaining an ordered set (see, e.g., Cormen, Leiserson, Rivest, and Stein: Introduction to Algorithms, 2nd ed., MIT Press, 2001). The best previous deterministic linear space bound was O(log n/loglog n) due to Fredman and Willard from STOC 1990. No better deterministic search bound was known using polynomial space. We also get the following worst-case linear space trade-offs between the number n, the word length w, and the maximal key U < 2^w: O(min{loglog n+log n/log w, (loglog n)(loglog U)/(logloglog U)}). These trade-offs are, however, not likely to be optimal. Our results are generalized to finger searching and string searching, providing optimal results for both in terms of n. | Dynamic Ordered Sets with Exponential Search Trees | 3,952 |
In this paper we present a theoretical analysis of the deterministic on-line Sum of Squares algorithm ($SS$) for bin packing introduced and studied experimentally in [CJK99], along with several new variants. $SS$ is applicable to any instance of bin packing in which the bin capacity $B$ and item sizes $s(a)$ are integral (or can be scaled to be so), and runs in time $O(nB)$. It performs remarkably well from an average case point of view: For any discrete distribution in which the optimal expected waste is sublinear, $SS$ also has sublinear expected waste. For any discrete distribution where the optimal expected waste is bounded, $SS$ has expected waste at most $O(\log n)$. In addition, we discuss several interesting variants on $SS$, including a randomized $O(nB\log B)$-time on-line algorithm $SS^*$, based on $SS$, whose expected behavior is essentially optimal for all discrete distributions. Algorithm $SS^*$ also depends on a new linear-programming-based pseudopolynomial-time algorithm for solving the NP-hard problem of determining, given a discrete distribution $F$, just what is the growth rate for the optimal expected waste. This article is a greatly expanded version of the conference paper [sumsq2000]. | On the Sum-of-Squares Algorithm for Bin Packing | 3,953 |
This paper shows that a simple algorithm produces the {\em all-prefixes-LCSs-graph} in $O(mn)$ time for two input sequences of size $m$ and $n$. Given any prefix $p$ of the first input sequence and any prefix $q$ of the second input sequence, all longest common subsequences (LCSs) of $p$ and $q$ can be generated in time proportional to the output size, once the all-prefixes-LCSs-graph has been constructed. The problem can be solved in the context of generating all the distinct character strings that represent an LCS or in the context of generating all ways of embedding an LCS in the two input strings. | Fast and Simple Computation of All Longest Common Subsequences | 3,954 |
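The all-prefixes information used above is exactly what the classic LCS dynamic-programming table records: entry L[i][j] is the LCS length of the prefixes p[:i] and q[:j]. A minimal sketch of that O(mn) table follows, with an invented example; the paper's contribution is the graph built on top of such prefix information, which this sketch does not construct.

```python
def lcs_length_table(p, q):
    """Return the (m+1) x (n+1) table of LCS lengths for all prefix pairs."""
    m, n = len(p), len(q)
    L = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if p[i - 1] == q[j - 1]:
                L[i][j] = L[i - 1][j - 1] + 1
            else:
                L[i][j] = max(L[i - 1][j], L[i][j - 1])
    return L

L = lcs_length_table("ABCBDAB", "BDCABA")
print(L[-1][-1])  # -> 4, e.g. "BCAB" or "BDAB"
```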
The number of the non-shared edges of two phylogenies is a basic measure of the dissimilarity between the phylogenies. The non-shared edges are also the building block for approximating a more sophisticated metric called the nearest neighbor interchange (NNI) distance. In this paper, we give the first subquadratic-time algorithm for finding the non-shared edges, which are then used to speed up the existing approximating algorithm for the NNI distance from $O(n^2)$ time to $O(n \log n)$ time. Another popular distance metric for phylogenies is the subtree transfer (STT) distance. Previous work on computing the STT distance considered degree-3 trees only. We give an approximation algorithm for the STT distance for degree-$d$ trees with arbitrary $d$ and with generalized STT operations. | Improved Phylogeny Comparisons: Non-Shared Edges, Nearest Neighbor
Interchanges, and Subtree Transfers | 3,955 |
We consider the problem of laying out a tree with fixed parent/child structure in hierarchical memory. The goal is to minimize the expected number of block transfers performed during a search along a root-to-leaf path, subject to a given probability distribution on the leaves. This problem was previously considered by Gil and Itai, who developed optimal but slow algorithms when the block-transfer size B is known. We present faster but approximate algorithms for the same problem; the fastest such algorithm runs in linear time and produces a solution that is within an additive constant of optimal. In addition, we show how to extend any approximately optimal algorithm to the cache-oblivious setting in which the block-transfer size is unknown to the algorithm. The query performance of the cache-oblivious layout is within a constant factor of the query performance of the optimal known-block-size layout. Computing the cache-oblivious layout requires only logarithmically many calls to the layout algorithm for known block size; in particular, the cache-oblivious layout can be computed in O(N lg N) time, where N is the number of nodes. Finally, we analyze two greedy strategies, and show that they have a performance ratio between Omega(lg B / lg lg B) and O(lg B) when compared to the optimal layout. | Efficient Tree Layout in a Multilevel Memory Hierarchy | 3,956 |
We suggest a variation of the Hellerstein--Koutsoupias--Papadimitriou indexability model for datasets equipped with a similarity measure, with the aim of better understanding the structure of indexing schemes for similarity-based search and the geometry of similarity workloads. This in particular provides a unified approach to a great variety of schemes used to index into metric spaces and facilitates their transfer to more general similarity measures such as quasi-metrics. We discuss links between performance of indexing schemes and high-dimensional geometry. The concepts and results are illustrated on a very large concrete dataset of peptide fragments equipped with a biologically significant similarity measure. | Indexing schemes for similarity search: an illustrated paradigm | 3,957 |
We consider geometric instances of the Maximum Weighted Matching Problem (MWMP) and the Maximum Traveling Salesman Problem (MTSP) with up to 3,000,000 vertices. Making use of a geometric duality relationship between MWMP, MTSP, and the Fermat-Weber-Problem (FWP), we develop a heuristic approach that yields in near-linear time solutions as well as upper bounds. Using various computational tools, we get solutions within considerably less than 1% of the optimum. An interesting feature of our approach is that, even though an FWP is hard to compute in theory and Edmonds' algorithm for maximum weighted matching yields a polynomial solution for the MWMP, the practical behavior is just the opposite, and we can solve the FWP with high accuracy in order to find a good heuristic solution for the MWMP. | Solving a "Hard" Problem to Approximate an "Easy" One: Heuristics for
Maximum Matchings and Maximum Traveling Salesman Problems | 3,958 |
We perform a smoothed analysis of the termination phase of an interior-point method. By combining this analysis with the smoothed analysis of Renegar's interior-point algorithm by Dunagan, Spielman and Teng, we show that the smoothed complexity of an interior-point algorithm for linear programming is $O (m^{3} \log (m/\sigma))$. In contrast, the best known bound on the worst-case complexity of linear programming is $O (m^{3} L)$, where $L$ could be as large as $m$. We include an introduction to smoothed analysis and a tutorial on proof techniques that have been useful in smoothed analyses. | Smoothed Analysis of Interior-Point Algorithms: Termination | 3,959 |
In this paper we propose a simple and efficient data structure yielding a perfect hashing of quite general arrays. The data structure is named phorma, which is an acronym for perfectly hashable order restricted multidimensional array. Keywords: Perfect hash function, Digraph, Implicit enumeration, Nijenhuis-Wilf combinatorial family. | PHORMA: Perfectly Hashable Order Restricted Multidimensional Arrays | 3,960 |
We show how to find a Hamiltonian cycle in a graph of degree at most three with n vertices, in time O(2^{n/3}) ~= 1.260^n and linear space. Our algorithm can find the minimum weight Hamiltonian cycle (traveling salesman problem), in the same time bound. We can also count or list all Hamiltonian cycles in a degree three graph in time O(2^{3n/8}) ~= 1.297^n. We also solve the traveling salesman problem in graphs of degree at most four, by randomized and deterministic algorithms with runtime O((27/4)^{n/3}) ~= 1.890^n and O((27/4+epsilon)^{n/3}) respectively. Our algorithms allow the input to specify a set of forced edges which must be part of any generated cycle. Our cycle listing algorithm shows that every degree three graph has O(2^{3n/8}) Hamiltonian cycles; we also exhibit a family of graphs with 2^{n/3} Hamiltonian cycles per graph. | The traveling salesman problem for cubic graphs | 3,961 |
We present the first explicit connection between quantum computation and lattice problems. Namely, we show a solution to the Unique Shortest Vector Problem (SVP) under the assumption that there exists an algorithm that solves the hidden subgroup problem on the dihedral group by coset sampling. Moreover, we solve the hidden subgroup problem on the dihedral group by using an average case subset sum routine. By combining the two results, we get a quantum reduction from $\Theta(n^{2.5})$-unique-SVP to the average case subset sum problem. | Quantum Computation and Lattice Problems | 3,962 |
We propose a very simple randomised data structure that stores an approximation from above of a lattice-valued function. Computing the function value requires a constant number of steps, and the error probability can be balanced with space usage, much like in Bloom filters. The structure is particularly well suited for functions that are bottom on most of their domain. We then show how to use our methods to store in a compact way the bad-character shift function for variants of the Boyer-Moore text search algorithms. As a result, we obtain practical implementations of these algorithms that can be used with large alphabets, such as Unicode collation elements, with a small setup time. The ideas described in this paper have been implemented as free software under the GNU General Public License within the MG4J project (http://mg4j.dsi.unimi.it/). | Compact Approximation of Lattice Functions with Applications to
Large-Alphabet Text Search | 3,963 |
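As a rough illustration of the bad-character shift function mentioned in the abstract above, here is a minimal Python sketch of a dictionary-based table usable with arbitrarily large alphabets. It is not the paper's compact randomized structure (there is no approximation or Bloom-filter-style hashing here); the function names and the Horspool-style search loop are illustrative assumptions.

```python
def bad_char_table(pattern):
    """Bad-character shifts for a Boyer-Moore-Horspool style search.
    A dict handles arbitrarily large alphabets (e.g. Unicode code points);
    characters absent from the pattern fall back to a full-length shift."""
    m = len(pattern)
    table = {}
    for i, c in enumerate(pattern[:-1]):   # last pattern character excluded
        table[c] = m - 1 - i               # distance from the end of the pattern
    return table

def horspool_search(text, pattern):
    """Return the index of the first occurrence of pattern in text, or -1."""
    m, n = len(pattern), len(text)
    if m == 0:
        return 0
    shifts = bad_char_table(pattern)
    i = 0
    while i + m <= n:
        if text[i:i + m] == pattern:
            return i
        i += shifts.get(text[i + m - 1], m)   # shift driven by last window char
    return -1
```

Using a dict keeps the setup time proportional to the pattern length rather than the alphabet size, which is the practical concern the abstract raises for Unicode-scale alphabets.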
We show how to support efficient back traversal in a unidirectional list, using small memory and with essentially no slowdown in forward steps. Using $O(\log n)$ memory for a list of size $n$, the $i$'th back-step from the farthest point reached so far takes $O(\log i)$ time in the worst case, while the overhead per forward step is at most $\epsilon$ for arbitrary small constant $\epsilon>0$. An arbitrary sequence of forward and back steps is allowed. A full trade-off between memory usage and time per back-step is presented: $k$ vs. $kn^{1/k}$ and vice versa. Our algorithms are based on a novel pebbling technique which moves pebbles on a virtual binary, or $t$-ary, tree that can only be traversed in a pre-order fashion. The compact data structures used by the pebbling algorithms, called list traversal synopses, extend to general directed graphs, and have other interesting applications, including memory efficient hash-chain implementation. Perhaps the most surprising application is in showing that for any program, arbitrary rollback steps can be efficiently supported with small overhead in memory, and marginal overhead in its ordinary execution. More concretely: Let $P$ be a program that runs for at most $T$ steps, using memory of size $M$. Then, at the cost of recording the input used by the program, and increasing the memory by a factor of $O(\log T)$ to $O(M \log T)$, the program $P$ can be extended to support an arbitrary sequence of forward execution and rollback steps: the $i$'th rollback step takes $O(\log i)$ time in the worst case, while forward steps take O(1) time in the worst case, and $1+\epsilon$ amortized time per step. | Efficient pebbling for list traversal synopses | 3,964 |
A maximum weighted matching for bipartite graphs $G=(A \cup B,E)$ can be found by using the algorithm of Edmonds and Karp with a Fibonacci Heap and a modified Dijkstra in $O(nm + n^2 \log{n})$ time where n is the number of nodes and m the number of edges. For the case that $|A|=|B|$ the number of edges is $n^2$ and therefore the complexity is $O(n^3)$. In this paper we want to present a simple heuristic method to reduce the number of edges of complete bipartite graphs $G=(A \cup B,E)$ with $|A|=|B|$ such that $m = n\log{n}$, and therefore the complexity is reduced to $O(n^2 \log{n})$. The weights of all edges in G must be uniformly distributed in [0,1]. | Heuristic to reduce the complexity of complete bipartite graphs to
accelerate the search for maximum weighted matchings with small error | 3,965 |
The Lovasz Local Lemma due to Erdos and Lovasz is a powerful tool in proving the existence of rare events. We present an extension of this lemma, which works well when the event to be shown to exist is a conjunction of individual events, each of which asserts that a random variable does not deviate much from its mean. As applications, we consider two classes of NP-hard integer programs: minimax and covering integer programs. A key technique, randomized rounding of linear relaxations, was developed by Raghavan and Thompson to derive good approximation algorithms for such problems. We use our extension of the Local Lemma to prove that randomized rounding produces, with non-zero probability, much better feasible solutions than known before, if the constraint matrices of these integer programs are column-sparse (e.g., routing using short paths, problems on hypergraphs with small dimension/degree). This complements certain well-known results from discrepancy theory. We also generalize the method of pessimistic estimators due to Raghavan, to obtain constructive (algorithmic) versions of our results for covering integer programs. | An Extension of the Lovasz Local Lemma, and its Applications to Integer
Programming | 3,966 |
In this paper we present a discrete data structure for reservations of limited resources. A reservation is defined as a tuple consisting of the time interval of when the resource should be reserved, $I_R$, and the amount of the resource that is reserved, $B_R$, formally $R=\{I_R,B_R\}$. The data structure is similar to a segment tree. The maximum spanning interval of the data structure is fixed and defined in advance. The granularity and thereby the size of the intervals of the leaves is also defined in advance. The data structure is built only once. Neither nodes nor leaves are ever inserted, deleted or moved. Hence, the running time of the operations does not depend on the number of reservations previously made. The running time does not depend on the size of the interval of the reservation either. Let $n$ be the number of leaves in the data structure. In the worst case, the number of touched (i.e. traversed) nodes is in any operation $O(\log n)$, hence the running time of any operation is also $O(\log n)$. | Static Data Structure for Discrete Advance Bandwidth Reservations on the
Internet | 3,967 |
We introduce a novel definition of approximate palindromes in strings, and provide an algorithm to find all maximal approximate palindromes in a string with up to $k$ errors. Our definition is based on the usual edit operations of approximate pattern matching, and the algorithm we give, for a string of size $n$ on a fixed alphabet, runs in $O(k^2 n)$ time. We also discuss two implementation-related improvements to the algorithm, and demonstrate their efficacy in practice by means of both experiments and an average-case analysis. | Finding approximate palindromes in strings | 3,968 |
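For intuition, the following sketch shows the center-expansion idea behind approximate palindromes, simplified to mismatches (Hamming errors) rather than the edit operations used in the paper; the function name, interface, and the mismatch-only error model are illustrative assumptions.

```python
def maximal_mismatch_palindromes(s, k):
    """For every center of s, expand outward while allowing at most k
    mismatching character pairs (a Hamming-style simplification of the
    edit-distance definition). Returns one maximal half-open span
    (start, end) per center."""
    n = len(s)
    spans = []
    for c in range(2 * n - 1):
        if c % 2 == 0:                      # odd-length: centered on a character
            left, right = c // 2 - 1, c // 2 + 1
        else:                               # even-length: centered between chars
            left, right = c // 2, c // 2 + 1
        errors = 0
        while left >= 0 and right < n:
            if s[left] != s[right]:
                if errors == k:
                    break
                errors += 1
            left -= 1
            right += 1
        spans.append((left + 1, right))     # maximal span for this center
    return spans
```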
We introduce top trees as a design of a new, simpler interface for data structures maintaining information in a fully-dynamic forest. We demonstrate how easy and versatile they are to use on a host of different applications. For example, we show how to maintain the diameter, center, and median of each tree in the forest. The forest can be updated by insertion and deletion of edges and by changes to vertex and edge weights. Each update is supported in O(log n) time, where n is the size of the tree(s) involved in the update. Also, we show how to support nearest common ancestor queries and level ancestor queries with respect to arbitrary roots in O(log n) time. Finally, with marked and unmarked vertices, we show how to compute distances to a nearest marked vertex. The latter has applications to approximate nearest marked vertex in general graphs, and thereby to static optimization problems over shortest path metrics. Technically speaking, top trees are easily implemented either with Frederickson's topology trees [Ambivalent Data Structures for Dynamic 2-Edge-Connectivity and k Smallest Spanning Trees, SIAM J. Comput. 26 (2) pp. 484-538, 1997] or with Sleator and Tarjan's dynamic trees [A Data Structure for Dynamic Trees. J. Comput. Syst. Sc. 26 (3) pp. 362-391, 1983]. However, we claim that the interface is simpler for many applications, and indeed our new bounds are quadratic improvements over previous bounds where they exist. | Maintaining Information in Fully-Dynamic Trees with Top Trees | 3,969
Wireless sensor networks (WSNs) are emerging as an effective means for environment monitoring. This paper investigates a strategy for energy efficient monitoring in WSNs that partitions the sensors into covers, and then activates the covers iteratively in a round-robin fashion. This approach takes advantage of the overlap created when many sensors monitor a single area. Our work builds upon previous work in "Power Efficient Organization of Wireless Sensor Networks" by Slijepcevic and Potkonjak, where the model is first formulated. We have designed three approximation algorithms for a variation of the SET K-COVER problem, where the objective is to partition the sensors into covers such that the number of covers that include an area, summed over all areas, is maximized. The first algorithm is randomized and partitions the sensors, in expectation, within a fraction 1 - 1/e (~.63) of the optimum. We present two other deterministic approximation algorithms. One is a distributed greedy algorithm with a 1/2 approximation ratio and the other is a centralized greedy algorithm with a 1 - 1/e approximation ratio. We show that it is NP-Complete to guarantee better than 15/16 of the optimal coverage, indicating that all three algorithms perform well with respect to the best approximation algorithm possible. Simulations indicate that in practice, the deterministic algorithms perform far above their worst case bounds, consistently covering more than 72% of what is covered by an optimum solution. Simulations also indicate that the increase in longevity is proportional to the amount of overlap amongst the sensors. The algorithms are fast, easy to use, and according to simulations, significantly increase the longevity of sensor networks. The randomized algorithm in particular seems quite practical. | Set K-Cover Algorithms for Energy Efficient Monitoring in Wireless
Sensor Networks | 3,970 |
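A minimal sketch of the randomized algorithm described above: each sensor is assigned independently and uniformly at random to one of the k covers, and the objective sums, over the covers, the number of distinct areas each cover monitors. The data layout (a dict from sensor id to the set of monitored areas) is an assumption made for illustration.

```python
import random

def randomized_set_k_cover(sensor_areas, k, seed=None):
    """Assign every sensor independently and uniformly at random to one of
    the k covers; in expectation this achieves a (1 - 1/e) fraction of the
    optimal total coverage. sensor_areas: dict sensor_id -> set of area ids."""
    rng = random.Random(seed)
    covers = [[] for _ in range(k)]
    for sensor in sensor_areas:
        covers[rng.randrange(k)].append(sensor)
    return covers

def total_coverage(covers, sensor_areas):
    """Objective: number of (cover, area) pairs such that some sensor in the
    cover monitors the area, summed over all covers."""
    total = 0
    for cover in covers:
        covered = set()
        for s in cover:
            covered |= sensor_areas[s]
        total += len(covered)
    return total
```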
We introduce several modifications of the partitioning schemes used in Hoare's quicksort and quickselect algorithms, including ternary schemes which identify keys less or greater than the pivot. We give estimates for the numbers of swaps made by each scheme. Our computational experiments indicate that ternary schemes allow quickselect to identify all keys equal to the selected key at little additional cost. | Partitioning schemes for quicksort and quickselect | 3,971 |
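One common ternary partitioning scheme (the Dutch-national-flag style) is sketched below inside a quickselect loop. This is a generic sketch rather than the exact schemes analyzed in the paper, but it illustrates how a ternary scheme isolates all keys equal to the selected key at no extra cost: they end up in the middle block a[lt:gt].

```python
import random

def partition3(a, lo, hi):
    """Three-way partition of a[lo:hi] around a random pivot.
    Returns (lt, gt) with a[lo:lt] < pivot, a[lt:gt] == pivot, a[gt:hi] > pivot."""
    p = random.randrange(lo, hi)
    a[lo], a[p] = a[p], a[lo]
    pivot = a[lo]
    lt, i, gt = lo, lo, hi
    while i < gt:
        if a[i] < pivot:
            a[lt], a[i] = a[i], a[lt]
            lt += 1
            i += 1
        elif a[i] > pivot:
            gt -= 1
            a[i], a[gt] = a[gt], a[i]
        else:
            i += 1
    return lt, gt

def quickselect(a, k):
    """Return the k-th smallest element (0 <= k < len(a)) of list a, in place."""
    lo, hi = 0, len(a)
    while True:
        lt, gt = partition3(a, lo, hi)
        if k < lt:
            hi = lt
        elif k >= gt:
            lo = gt
        else:                 # a[lt:gt] holds every key equal to the answer
            return a[k]
```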
We show that several versions of Floyd and Rivest's algorithm Select for finding the $k$th smallest of $n$ elements require at most $n+\min\{k,n-k\}+o(n)$ comparisons on average and with high probability. This rectifies the analysis of Floyd and Rivest, and extends it to the case of nondistinct elements. Our computational results confirm that Select may be the best algorithm in practice. | Randomized selection with quintary partitions | 3,972 |
We show that several versions of Floyd and Rivest's algorithm Select [Comm.\ ACM {\bf 18} (1975) 173] for finding the $k$th smallest of $n$ elements require at most $n+\min\{k,n-k\}+o(n)$ comparisons on average, even when equal elements occur. This parallels our recent analysis of another variant due to Floyd and Rivest [Comm. ACM {\bf 18} (1975) 165--172]. Our computational results suggest that both variants perform well in practice, and may compete with other selection methods, such as Hoare's Find or quickselect with median-of-3 pivots. | Randomized selection with tripartitioning | 3,973 |
We show that several versions of Floyd and Rivest's improved algorithm Select for finding the $k$th smallest of $n$ elements require at most $n+\min\{k,n-k\}+O(n^{1/2}\ln^{1/2}n)$ comparisons on average and with high probability. This rectifies the analysis of Floyd and Rivest, and extends it to the case of nondistinct elements. Encouraging computational results on large median-finding problems are reported. | Improved randomized selection | 3,974 |
An optimization problem that naturally arises in the study of swarm robotics is the Freeze-Tag Problem (FTP) of how to awaken a set of ``asleep'' robots, by having an awakened robot move to their locations. Once a robot is awake, it can assist in awakening other slumbering robots. The objective is to have all robots awake as early as possible. While the FTP bears some resemblance to problems from areas in combinatorial optimization such as routing, broadcasting, scheduling, and covering, its algorithmic characteristics are surprisingly different. We consider both scenarios on graphs and in geometric environments. In graphs, robots sleep at vertices and there is a length function on the edges. Awake robots travel along edges, with time depending on edge length. For most scenarios, we consider the offline version of the problem, in which each awake robot knows the position of all other robots. We prove that the problem is NP-hard, even for the special case of star graphs. We also establish hardness of approximation, showing that it is NP-hard to obtain an approximation factor better than 5/3, even for graphs of bounded degree. These lower bounds are complemented with several positive algorithmic results, including: (1) We show that the natural greedy strategy on star graphs has a tight worst-case performance of 7/3 and give a polynomial-time approximation scheme (PTAS) for star graphs. (2) We give a simple O(log D)-competitive online algorithm for graphs with maximum degree D and locally bounded edge weights. (3) We give a PTAS, running in nearly linear time, for geometrically embedded instances. | The Freeze-Tag Problem: How to Wake Up a Swarm of Robots | 3,975
We generalize univariate multipoint evaluation of polynomials of degree n at sublinear amortized cost per point. More precisely, it is shown how to evaluate a bivariate polynomial p of maximum degree less than n, specified by its n^2 coefficients, simultaneously at n^2 given points using a total of O(n^{2.667}) arithmetic operations. In terms of the input size N being quadratic in n, this amounts to an amortized cost of O(N^{0.334}) per point. | Fast Multipoint-Evaluation of Bivariate Polynomials | 3,976 |
In this paper, we present a probabilistic self-balancing dictionary data structure for massive data sets, and prove expected amortized I/O-optimal bounds on the dictionary operations. We show how to use the structure as an I/O-optimal priority queue. The data structure, which we call the random buffer tree, abstracts the properties of the random treap and the buffer tree and has the same expected I/O-bounds as the buffer tree. | The Random Buffer Tree : A Randomized Technique for I/O-efficient
Algorithms | 3,977 |
We study an interesting family of cooperating coroutines, which is able to generate all patterns of bits that satisfy certain fairly general ordering constraints, changing only one bit at a time. (More precisely, the directed graph of constraints is required to be cycle-free when it is regarded as an undirected graph.) If the coroutines are implemented carefully, they yield an algorithm that needs only a bounded amount of computation per bit change, thereby solving an open problem in the field of combinatorial pattern generation. | Efficient coroutine generation of constrained Gray sequences | 3,978 |
The maximum intersection problem for a matroid and a greedoid, given by polynomial-time oracles, is shown $NP$-hard by expressing the satisfiability of boolean formulas in 3-conjunctive normal form as such an intersection. The corresponding approximation problems are shown $NP$-hard for certain approximation performance bounds. Moreover, some natural parameterized variants of the problem are shown $W[P]$-hard. The results are in contrast with the maximum matroid-matroid intersection which is solvable in polynomial time by an old result of Edmonds. We also prove that it is $NP$-hard to approximate the weighted greedoid maximization within $2^{n^{O(1)}}$ where $n$ is the size of the domain of the greedoid. A preliminary version ``The Complexity of Maximum Matroid-Greedoid Intersection'' appeared in Proc. FCT 2001, LNCS 2138, pp. 535--539, Springer-Verlag 2001. | The Complexity of Maximum Matroid-Greedoid Intersection and Weighted
Greedoid Maximization | 3,979 |
A nearly logarithmic lower bound on the randomized competitive ratio for the metrical task systems problem is presented. This implies a similar lower bound for the extensively studied k-server problem. The proof is based on Ramsey-type theorems for metric spaces, that state that every metric space contains a large subspace which is approximately a hierarchically well-separated tree (and in particular an ultrametric). These Ramsey-type theorems may be of independent interest. | Ramsey-type theorems for metric spaces with applications to online
problems | 3,980 |
Unfair metrical task systems are a generalization of online metrical task systems. In this paper we introduce new techniques to combine algorithms for unfair metrical task systems and apply these techniques to obtain improved randomized online algorithms for metrical task systems on arbitrary metric spaces. | Better algorithms for unfair metrical task systems and applications | 3,981 |
This paper is concerned with online caching algorithms for the (n,k)-companion cache, defined by Brehob et al. In this model the cache is composed of two components: a k-way set-associative cache and a companion fully-associative cache of size n. We show that the deterministic competitive ratio for this problem is (n+1)(k+1)-1, and the randomized competitive ratio is O(\log n \log k) and \Omega(\log n +\log k). | Online Companion Caching | 3,982
We consider the problem of searching for an object on a line at an unknown distance OPT from the original position of the searcher, in the presence of a cost of d for each time the searcher changes direction. This is a generalization of the well-studied linear-search problem. We describe a strategy that is guaranteed to find the object at a cost of at most 9*OPT + 2d, which has the optimal competitive ratio 9 with respect to OPT plus the minimum corresponding additive term. Our argument for the upper and lower bounds uses an infinite linear program, which we solve by experimental solution of an infinite series of approximating finite linear programs, estimating the limits, and solving the resulting recurrences. We feel that this technique is interesting in its own right and should help solve other searching problems. In particular, we consider the star search or cow-path problem with turn cost, where the hidden object is placed on one of m rays emanating from the original position of the searcher. For this problem we give a tight bound of (1 + 2(m^m)/((m-1)^(m-1))) OPT + m ((m/(m-1))^(m-1) - 1) d. We also discuss the tradeoff between the corresponding coefficients, and briefly consider randomized strategies on the line. | Online Searching with Turn Cost | 3,983
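For intuition, here is the classic doubling strategy on the line written as a cost calculator with an explicit per-turn charge d. The paper's strategy tunes the excursion lengths to obtain the stated 9*OPT + 2d guarantee; this plain version does not, and the turn accounting (one turn at each far end, none at the origin) and the opt >= 1 normalization are illustrative assumptions.

```python
def doubling_search_cost(opt, side, d=0.0):
    """Cost incurred by the classic doubling strategy: walk 1 to the right,
    2 to the left, 4 to the right, ... until the target is passed.
    opt: (unknown) distance to the target; side: +1 or -1; d: cost per turn."""
    assert opt >= 1 and side in (+1, -1)    # usual normalization opt >= 1
    cost, reach, direction = 0.0, 1.0, +1
    while True:
        if direction == side and reach >= opt:
            cost += opt                      # final leg: straight to the target
            return cost
        cost += 2 * reach + d                # out to `reach`, back, one turn
        reach *= 2
        direction = -direction
```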
Traditional Insertion Sort runs in O(n^2) time because each insertion takes O(n) time. When people run Insertion Sort in the physical world, they leave gaps between items to accelerate insertions. Gaps help in computers as well. This paper shows that Gapped Insertion Sort has insertion times of O(log n) with high probability, yielding a total running time of O(n log n) with high probability. | Insertion Sort is O(n log n) | 3,984 |
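A minimal Python sketch of the gapped ("library") insertion sort idea described above: sorted elements are kept spread out with gaps so an insertion only shifts elements up to the nearest gap, and the array is re-spread when the element count doubles. The rebalancing policy and gap density below are illustrative choices, not the paper's exact parameters.

```python
def library_sort(items):
    """Gapped insertion sort sketch: elements interleaved with gaps (None)."""
    gapped = [None, None]
    count, next_rebalance = 0, 1

    def rebalance():
        nonlocal gapped
        elems = [x for x in gapped if x is not None]
        spread = []
        for x in elems:
            spread.extend((x, None))                  # one gap per element
        spread.extend([None] * max(2, len(spread)))   # slack at the end
        gapped = spread

    def insert(x):
        nonlocal count
        lo, hi = 0, len(gapped)
        while lo < hi:                      # binary search that skips gaps
            mid = (lo + hi) // 2
            j = mid
            while j < hi and gapped[j] is None:
                j += 1
            if j == hi or gapped[j] >= x:
                hi = mid
            else:
                lo = j + 1
        j = lo                              # nearest gap at or right of lo
        while j < len(gapped) and gapped[j] is not None:
            j += 1
        if j == len(gapped):                # no gap left: re-spread and retry
            rebalance()
            insert(x)
            return
        gapped[lo + 1:j + 1] = gapped[lo:j]  # shift only the short block
        gapped[lo] = x
        count += 1

    for x in items:
        insert(x)
        if count >= next_rebalance:          # re-spread when the count doubles
            rebalance()
            next_rebalance *= 2
    return [x for x in gapped if x is not None]

# Example: library_sort([5, 3, 8, 1]) -> [1, 3, 5, 8]
```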
The study of hashing is closely related to the analysis of balls and bins. It is well known that, instead of using a single hash function, if we randomly hash a ball into two bins and place it in the smaller of the two, the maximum load on the bins drops dramatically. This leads to the concept of two-way hashing, where the largest bucket contains $O(\log\log n)$ balls with high probability. A hash lookup now searches both buckets an item hashes to. Since an item may be placed in one of two buckets, we could potentially move an item after it has been initially placed to reduce the maximum load. We show that by performing moves during inserts, a maximum load of 2 can be maintained on-line, with high probability, while supporting hash update operations. In fact, with $n$ buckets, even if space for two items is pre-allocated per bucket, as may be desirable in hardware implementations, more than $n$ items can be stored, giving high memory utilization. We also analyze the trade-off between the number of moves performed during inserts and the maximum load on a bucket. By performing at most $h$ moves, we can maintain a maximum load of $O(\frac{\log \log n}{h \log(\log\log n/h)})$. So, even by performing one move, we achieve a better bound than by performing no moves at all. | Efficient Hashing with Lookups in two Memory Accesses | 3,985
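A simplified sketch in the spirit of the abstract above: two hash choices, buckets of capacity two, and at most one relocation per insert. It is close to cuckoo hashing with bucket size 2 and is not the paper's exact scheme; the hash salting, the class interface, and the lack of duplicate handling are illustrative assumptions.

```python
import random

class TwoChoiceTable:
    """Two-way hashing with bucket capacity 2 and at most one move per insert."""

    def __init__(self, n_buckets, seed=0):
        self.n = n_buckets
        self.buckets = [[] for _ in range(n_buckets)]
        rng = random.Random(seed)
        self.salt1, self.salt2 = rng.random(), rng.random()

    def _slots(self, key):
        return hash((self.salt1, key)) % self.n, hash((self.salt2, key)) % self.n

    def insert(self, key):
        h1, h2 = self._slots(key)
        b1, b2 = self.buckets[h1], self.buckets[h2]
        target = b1 if len(b1) <= len(b2) else b2
        if len(target) < 2:                 # room in the emptier bucket
            target.append(key)
            return True
        for bucket in (b1, b2):             # both full: try one relocation
            for resident in bucket:
                a1, a2 = self._slots(resident)
                alt = a2 if bucket is self.buckets[a1] else a1
                if len(self.buckets[alt]) < 2:
                    bucket.remove(resident)
                    self.buckets[alt].append(resident)
                    bucket.append(key)
                    return True
        return False                        # would need more moves or a rehash

    def lookup(self, key):
        h1, h2 = self._slots(key)           # two memory accesses
        return key in self.buckets[h1] or key in self.buckets[h2]
```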
We describe algorithms, based on Avis and Fukuda's reverse search paradigm, for listing all maximal independent sets in a sparse graph in polynomial time and delay per output. For bounded degree graphs, our algorithms take constant time per set generated; for minor-closed graph families, the time is O(n) per set, and for more general sparse graph families we achieve subquadratic time per set. We also describe new data structures for maintaining a dynamic vertex set S in a sparse or minor-closed graph family, and querying the number of vertices not dominated by S; for minor-closed graph families the time per update is constant, while it is sublinear for any sparse graph family. We can also maintain a dynamic vertex set in an arbitrary m-edge graph and test the independence of the maintained set in time O(sqrt m) per update. We use the domination data structures as part of our enumeration algorithms. | All Maximal Independent Sets and Dynamic Dominance for Sparse Graphs | 3,986 |
Metric embedding has become a common technique in the design of algorithms. Its applicability is often dependent on how high the embedding's distortion is. For example, embedding finite metric space into trees may require linear distortion as a function of its size. Using probabilistic metric embeddings, the bound on the distortion reduces to logarithmic in the size. We make a step in the direction of bypassing the lower bound on the distortion in terms of the size of the metric. We define "multi-embeddings" of metric spaces in which a point is mapped onto a set of points, while keeping the target metric of polynomial size and preserving the distortion of paths. The distortion obtained with such multi-embeddings into ultrametrics is at most O(log Delta loglog Delta) where Delta is the aspect ratio of the metric. In particular, for expander graphs, we are able to obtain constant distortion embeddings into trees in contrast with the Omega(log n) lower bound for all previous notions of embeddings. We demonstrate the algorithmic application of the new embeddings for two optimization problems: group Steiner tree and metrical task systems. | Multi-Embedding of Metric Spaces | 3,987 |
Sorting and hashing are two completely different concepts in computer science, and appear mutually exclusive to one another. Hashing is a search method using the data as a key to map to the location within memory, and is used for rapid storage and retrieval. Sorting is a process of organizing data from a random permutation into an ordered arrangement, and is a common activity performed frequently in a variety of applications. Almost all conventional sorting algorithms work by comparison, and in doing so have a linearithmic greatest lower bound on the algorithmic time complexity. Any improvement in the theoretical time complexity of a sorting algorithm can therefore translate into much larger gains in speed for the applications that use it. Such an algorithm needs to order the data by some method other than comparison in order to break the linearithmic boundary on algorithmic performance. The hash sort is a general-purpose, non-comparison-based sorting algorithm that works by hashing, and it has some interesting features not found in conventional sorting algorithms. The hash sort asymptotically outperforms the fastest traditional sorting algorithm, the quick sort. The hash sort algorithm has a linear time complexity factor -- even in the worst case. The hash sort opens an area for further work and investigation into alternative means of sorting. | Hash sort: A linear time complexity multiple-dimensional sort algorithm | 3,988
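The simplest way to see sorting-by-hashing is the special case of distinct integers in a known range, where the key value itself determines the output slot. This toy sketch is closer to an address-calculation (counting-style) placement than to the paper's multidimensional hash sort, and the distinct-keys-in-known-range assumption is made purely for illustration.

```python
def hash_sort_toy(keys, lo, hi):
    """Sort distinct integer keys in the known range [lo, hi] without
    comparisons: each key's value directly determines its slot."""
    table = [None] * (hi - lo + 1)
    for k in keys:
        table[k - lo] = k          # O(1) placement, no comparisons
    return [k for k in table if k is not None]

# Example: hash_sort_toy([42, 7, 19, 3], 0, 50) -> [3, 7, 19, 42]
```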
We study the problem of scheduling equal-length jobs with release times and deadlines, where the objective is to maximize the number of completed jobs. Preemptions are not allowed. In Graham's notation, the problem is described as 1|r_j;p_j=p|\sum U_j. We give the following results: (1) We show that the often cited algorithm by Carlier from 1981 is not correct. (2) We give an algorithm for this problem with running time O(n^5). | A Note on Scheduling Equal-Length Jobs to Maximize Throughput | 3,989 |
Consider laying out a fixed-topology tree of N nodes into external memory with block size B so as to minimize the worst-case number of block memory transfers required to traverse a path from the root to a node of depth D. We prove that the optimal number of memory transfers is $$ \cases{ \displaystyle \Theta\left( {D \over \lg (1{+}B)} \right) & when $D = O(\lg N)$, \cr \displaystyle \Theta\left( {\lg N \over \lg \left(1{+}{B \lg N \over D}\right)} \right) & when $D = \Omega(\lg N)$ and $D = O(B \lg N)$, \cr \displaystyle \Theta\left( {D \over B} \right) & when $D = \Omega(B \lg N)$. } $$ | Worst-Case Optimal Tree Layout in External Memory | 3,990 |
Described are two algorithms to find long approximate palindromes in a string, for example a DNA sequence. A simple algorithm requires $O(n)$ space and almost always runs in $O(kn)$ time, where n is the length of the string and k is the number of ``errors'' allowed in the palindrome. Its worst-case time complexity is $O(n^2)$, but this does not occur with real biological sequences. A more complex algorithm guarantees $O(kn)$ worst-case time complexity. | Finding Approximate Palindromes in Strings Quickly and Simply | 3,991
A concept of "evolving categories" is suggested to build a simple, scalable, mathematically consistent framework for representing both data and algorithms in a uniform way. A state machine for executing algorithms acquires clear, rich, and powerful semantics based on category theory, while still allowing easy implementation. Moreover, it gives an original insight into the nature and semantics of algorithms. | Evolving Categories: Consistent Framework for Representation of Data and
Algorithms | 3,992 |
We study the problem of preemptive scheduling of n equal-length jobs with given release times on m identical parallel machines. The objective is to minimize the average flow time. Recently, Brucker and Kravchenko proved that the optimal schedule can be computed in polynomial time by solving a linear program with O(n^3) variables and constraints, followed by some substantial post-processing (where n is the number of jobs.) In this note we describe a simple linear program with only O(mn) variables and constraints. Our linear program produces directly the optimal schedule and does not require any post-processing. | Preemptive Multi-Machine Scheduling of Equal-Length Jobs to Minimize the
Average Flow Time | 3,993 |
Histograms are used to summarize the contents of relations into a number of buckets for the estimation of query result sizes. Several techniques (e.g., MaxDiff and V-Optimal) have been proposed in the past for determining bucket boundaries which provide accurate estimations. However, while search strategies for optimal bucket boundaries are rather sophisticated, not much attention has been paid to estimating queries inside buckets, and all of the above techniques adopt naive methods for such an estimation. This paper focuses on the problem of improving the estimation inside a bucket once its boundaries have been fixed. The proposed technique is based on the addition, to each bucket, of 32 bits of additional information (organized into a 4-level tree index), storing approximate cumulative frequencies at 7 internal intervals of the bucket. Both theoretical analysis and experimental results show that, among a number of alternative ways to organize the additional information, the 4-level tree index provides the best frequency estimation inside a bucket. The index is later added to two well-known histograms, MaxDiff and V-Optimal, obtaining the non-obvious result that, despite the spatial cost of the 4LT, which reduces the number of allowed buckets once the storage space has been fixed, the original methods are strongly improved in terms of accuracy. | Enhancing Histograms by Tree-Like Bucket Indices | 3,994
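A rough sketch of the underlying idea: store cumulative frequency fractions at a few internal positions of the bucket and interpolate between them to estimate a sub-range. The real 4LT packs these values hierarchically into 32 bits; here they are kept as plain floats, and the equally spaced split positions, field names, and nonempty-bucket assumption are illustrative.

```python
def build_bucket_index(freqs, splits=7):
    """Summary of one bucket: cumulative frequency fractions at `splits`
    equally spaced internal positions (assumes sum(freqs) > 0)."""
    n, total = len(freqs), float(sum(freqs))
    prefix = [0]
    for f in freqs:
        prefix.append(prefix[-1] + f)
    bounds = [round((i + 1) * n / (splits + 1)) for i in range(splits)]
    return {
        "total": total,
        "bounds": [0] + bounds + [n],
        "cumfrac": [0.0] + [prefix[b] / total for b in bounds] + [1.0],
    }

def estimate_range(index, a, b):
    """Estimate the frequency of positions [a, b) inside the bucket by
    linear interpolation between the stored cumulative fractions."""
    def cum(pos):
        bounds, fracs = index["bounds"], index["cumfrac"]
        for i in range(1, len(bounds)):
            if pos <= bounds[i]:
                left, right = bounds[i - 1], bounds[i]
                if right == left:
                    return fracs[i]
                return fracs[i - 1] + (fracs[i] - fracs[i - 1]) * (pos - left) / (right - left)
        return 1.0
    return (cum(b) - cum(a)) * index["total"]
```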
We consider the problem of maintaining a dynamic set of integers and answering queries of the form: report a point (equivalently, all points) in a given interval. Range searching is a natural and fundamental variant of integer search, and can be solved using predecessor search. However, for a RAM with w-bit words, we show how to perform updates in O(lg w) time and answer queries in O(lg lg w) time. The update time is identical to the van Emde Boas structure, but the query time is exponentially faster. Existing lower bounds show that achieving our query time for predecessor search requires doubly-exponentially slower updates. We present some arguments supporting the conjecture that our solution is optimal. Our solution is based on a new and interesting recursion idea which is "more extreme" than the van Emde Boas recursion. Whereas van Emde Boas uses a simple recursion (repeated halving) on each path in a trie, we use a nontrivial, van Emde Boas-like recursion on every such path. Despite this, our algorithm is quite clean when seen from the right angle. To achieve linear space for our data structure, we solve a problem which is of independent interest. We develop the first scheme for dynamic perfect hashing requiring sublinear space. This gives a dynamic Bloomier filter (an approximate storage scheme for sparse vectors) which uses low space. We strengthen previous lower bounds to show that these results are optimal. | On Dynamic Range Reporting in One Dimension | 3,995
In this paper we address two optimization problems arising in the design of genomic assays based on universal tag arrays. First, we address the universal array tag set design problem. For this problem, we extend previous formulations to incorporate antitag-to-antitag hybridization constraints in addition to constraints on antitag-to-tag hybridization specificity, establish a constructive upper bound on the maximum number of tags satisfying the extended constraints, and propose a simple greedy tag selection algorithm. Second, we give methods for improving the multiplexing rate in large-scale genomic assays by combining primer selection with tag assignment. Experimental results on simulated data show that this integrated optimization leads to reductions of up to 50% in the number of required arrays. | Improved Tag Set Design and Multiplexing Algorithms for Universal Arrays | 3,996 |
String barcoding is a recently introduced technique for genomic-based identification of microorganisms. In this paper we describe the engineering of highly scalable algorithms for robust string barcoding. Our methods enable distinguisher selection based on whole genomic sequences of hundreds of microorganisms of up to bacterial size on a well-equipped workstation, and can be easily parallelized to further extend the applicability range to thousands of bacterial size genomes. Experimental results on both randomly generated and NCBI genomic data show that whole-genome based selection results in a number of distinguishers nearly matching the information theoretic lower bounds for the problem. | Highly Scalable Algorithms for Robust String Barcoding | 3,997 |
In this paper we propose new solution methods for designing tag sets for use in universal DNA arrays. First, we give integer linear programming formulations for two previous formalizations of the tag set design problem, and show that these formulations can be solved to optimality for instance sizes of practical interest by using general purpose optimization packages. Second, we note the benefits of periodic tags, and establish an interesting connection between the tag design problem and the problem of packing the maximum number of vertex-disjoint directed cycles in a given graph. We show that combining a simple greedy cycle packing algorithm with a previously proposed alphabetic tree search strategy yields an increase of over 40% in the number of tags compared to previous methods. | Exact and Approximation Algorithms for DNA Tag Set Design | 3,998 |
We continue the investigation of problems concerning correlation clustering or clustering with qualitative information, which is a clustering formulation that has been studied recently. The basic setup here is that we are given as input a complete graph on n nodes (which correspond to the items to be clustered) whose edges are labeled + (for similar pairs of items) and - (for dissimilar pairs of items). Thus the input consists only of qualitative information on similarity and no quantitative distance measure between items. The quality of a clustering is measured in terms of its number of agreements, which is simply the number of edges it correctly classifies, that is, the sum of the number of - edges whose endpoints it places in different clusters plus the number of + edges both of whose endpoints it places within the same cluster. In this paper, we study the problem of finding clusterings that maximize the number of agreements, and the complementary minimization version where we seek clusterings that minimize the number of disagreements. We focus on the situation when the number of clusters is stipulated to be a small constant k. Our main result is that for every k, there is a polynomial time approximation scheme for both maximizing agreements and minimizing disagreements. (The problems are NP-hard for every k >= 2.) The main technical work is for the minimization version, as the PTAS for maximizing agreements follows along the lines of the property tester for Max k-CUT. In contrast, when the number of clusters is not specified, the problem of minimizing disagreements was shown to be APX-hard, even though the maximization version admits a PTAS. | Correlation Clustering with a Fixed Number of Clusters | 3,999
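The agreement objective defined above is easy to state in code. The following sketch counts the agreements of a given clustering over +/- labeled edges; the edge encoding as (u, v, sign) triples and the label dictionary are assumptions made for illustration.

```python
def agreements(labels, edges):
    """Number of agreements of a clustering: '+' edges whose endpoints share
    a cluster plus '-' edges whose endpoints lie in different clusters.
    labels: dict node -> cluster id; edges: iterable of (u, v, sign)."""
    agree = 0
    for u, v, sign in edges:
        same = labels[u] == labels[v]
        if (sign == '+' and same) or (sign == '-' and not same):
            agree += 1
    return agree

# Disagreements are the complement: total number of edges minus agreements.
```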