Title: Metric Embedding for Nearest Neighbor Classification
Abstract: The distance metric plays an important role in nearest neighbor (NN) classification. Usually the Euclidean distance metric is assumed, or a Mahalanobis distance metric is optimized to improve the NN performance. In this paper, we study the problem of embedding arbitrary metric spaces into a Euclidean space with the goal of improving the accuracy of the NN classifier. We propose a solution by appealing to the framework of regularization in a reproducing kernel Hilbert space and prove a representer-like theorem for NN classification. The embedding function is then determined by solving a semidefinite program, which has an interesting connection to the soft-margin linear binary support vector machine classifier. Although the main focus of this paper is to present a general, theoretical framework for metric embedding in a NN setting, we demonstrate the performance of the proposed method on some benchmark datasets and show that it performs better than the Mahalanobis metric learning algorithm in terms of leave-one-out and generalization errors.
Title: On probabilities for separating sets of order statistics
Abstract: Consider a set of order statistics that arise from sorting samples from two different populations, each with its own, possibly different distribution function. The probability that these order statistics fall in disjoint, ordered intervals, and the probability that, among the smallest statistics, a certain number come from the first population, are given in terms of the two distribution functions. The result is applied to computing the joint probability of the number of rejections and the number of false rejections for the Benjamini-Hochberg false discovery rate procedure.
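For readers unfamiliar with the procedure named above, the Benjamini-Hochberg step-up rule can be sketched in a few lines; the function name and inputs below are illustrative, not taken from the paper.

```python
def benjamini_hochberg(p_values, alpha=0.05):
    """Return the indices of hypotheses rejected at FDR level alpha."""
    m = len(p_values)
    # Sort the p-values while remembering their original positions.
    order = sorted(range(m), key=lambda i: p_values[i])
    # Find the largest rank k (1-based) with p_(k) <= (k/m) * alpha.
    k_max = 0
    for rank, i in enumerate(order, start=1):
        if p_values[i] <= rank / m * alpha:
            k_max = rank
    # Reject the k_max hypotheses with the smallest p-values.
    return sorted(order[:k_max])
```

The number of rejections is then the length of the returned index list, which is the quantity whose joint distribution with the number of false rejections the abstract studies.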
Title: A Collection of Definitions of Intelligence
Abstract: This paper is a survey of a large number of informal definitions of ``intelligence'' that the authors have collected over the years. Naturally, compiling a complete list would be impossible, as many definitions of intelligence are buried deep inside articles and books. Nevertheless, the 70-odd definitions presented here are, to the authors' knowledge, the largest and best-referenced collection available.
Title: Scale-sensitive Psi-dimensions: the Capacity Measures for Classifiers Taking Values in R^Q
Abstract: Bounds on the risk play a crucial role in statistical learning theory. They usually involve, as a capacity measure of the model studied, the VC dimension or one of its extensions. In classification, such "VC dimensions" exist for models taking values in {0, 1}, {1,..., Q} and R. We introduce the generalizations appropriate for the missing case, that of models with values in R^Q. This provides us with a new guaranteed risk for M-SVMs which appears superior to the existing one.
Title: Distributions associated with general runs and patterns in hidden Markov models
Abstract: This paper gives a method for computing distributions associated with patterns in the state sequence of a hidden Markov model, conditional on observing all or part of the observation sequence. Probabilities are computed for very general classes of patterns (competing patterns and generalized later patterns), and thus, the theory includes as special cases results for a large class of problems that have wide application. The unobserved state sequence is assumed to be Markovian with a general order of dependence. An auxiliary Markov chain is associated with the state sequence and is used to simplify the computations. Two examples are given to illustrate the use of the methodology. Whereas the first application is more to illustrate the basic steps in applying the theory, the second is a more detailed application to DNA sequences, and shows that the methods can be adapted to include restrictions related to biological knowledge.
Title: Event Weighted Tests for Detecting Periodicity in Photon Arrival Times
Abstract: This paper treats the problem of detecting periodicity in a sequence of photon arrival times, which occurs, for example, in attempting to detect gamma-ray pulsars. A particular focus is on how auxiliary information, typically source intensity, background intensity, and incidence angles and energies associated with each photon arrival, should be used to maximize the detection power. We construct a class of likelihood-based tests, score tests, which give rise to event weighting in a principled and natural way, and derive expressions quantifying the power of the tests. These results can be used to compare the efficacies of different weight functions, including cuts in energy and incidence angle. The test is targeted toward a template for the periodic lightcurve, and we quantify how deviation from that template affects the power of detection.
Title: SiZer for time series: A new approach to the analysis of trends
Abstract: Smoothing methods and SiZer are a useful statistical tool for discovering statistically significant structure in data. Based on scale space ideas originally developed in the computer vision literature, SiZer (SIgnificant ZERo crossing of the derivatives) is a graphical device to assess which observed features are `really there' and which are just spurious sampling artifacts. In this paper, we develop SiZer like ideas in time series analysis to address the important issue of significance of trends. This is not a straightforward extension, since one data set does not contain the information needed to distinguish `trend' from `dependence'. A new visualization is proposed, which shows the statistician the range of trade-offs that are available. Simulation and real data results illustrate the effectiveness of the method.
Title: Theory of Finite or Infinite Trees Revisited
Abstract: We present in this paper a first-order axiomatization of an extended theory $T$ of finite or infinite trees, built on a signature containing an infinite set of function symbols and a relation $\fini(t)$ which makes it possible to distinguish between finite and infinite trees. We show that $T$ has at least one model and prove its completeness by giving not only a decision procedure, but a full first-order constraint solver which gives clear and explicit solutions for any first-order constraint satisfaction problem in $T$. The solver is given in the form of 16 rewriting rules which transform any first-order constraint $\phi$ into an equivalent disjunction $\phi'$ of simple formulas such that $\phi'$ is either the formula $\true$ or the formula $\false$ or a formula having at least one free variable, being equivalent neither to $\true$ nor to $\false$ and where the solutions of the free variables are expressed in a clear and explicit way. The correctness of our rules implies the completeness of $T$. We also describe an implementation of our algorithm in CHR (Constraint Handling Rules) and compare its performance with that of an implementation in C++ and that of a recent decision procedure for decomposable theories.
Title: A Robust Linguistic Platform for Efficient and Domain specific Web Content Analysis
Abstract: Web semantic access in specific domains calls for specialized search engines with enhanced semantic querying and indexing capacities, which pertain both to information retrieval (IR) and to information extraction (IE). A rich linguistic analysis is required either to identify the relevant semantic units to index and to weight them according to domain-specific statistical distributions, or as the basis of an information extraction process. Recent developments make Natural Language Processing (NLP) techniques reliable enough to process large collections of documents and to enrich them with semantic annotations. This paper focuses on the design and the development of a text processing platform, Ogmios, which has been developed in the ALVIS project. The Ogmios platform exploits existing NLP modules and resources, which may be tuned to specific domains, and produces linguistically annotated documents. We show how the three constraints of genericity, domain semantic awareness and performance can be handled all together.
Title: On semiparametric regression with O'Sullivan penalised splines
Abstract: This is an expos\'e on the use of O'Sullivan penalised splines in contemporary semiparametric regression, including mixed model and Bayesian formulations. O'Sullivan penalised splines are similar to P-splines, but have the advantage of being a direct generalisation of smoothing splines. Exact expressions for the O'Sullivan penalty matrix are obtained. A comparison between the two reveals that O'Sullivan penalised splines more closely mimic the natural boundary behaviour of smoothing splines. Implementation in modern computing environments such as Matlab, R and BUGS is discussed.
Title: The random Tukey depth
Abstract: The computation of the Tukey depth, also called halfspace depth, is very demanding, even in low dimensional spaces, because it requires the consideration of all possible one-dimensional projections. In this paper we propose a random depth which approximates the Tukey depth. It only takes into account a finite number of one-dimensional projections which are chosen at random. Thus, this random depth requires a very small computation time even in high dimensional spaces. Moreover, it is easily extended to cover the functional framework. We present some simulations indicating how many projections should be considered depending on the sample size and on the dimension of the sample space. We also compare this depth with some others proposed in the literature. It is noteworthy that the random depth, based on a very low number of projections, obtains results very similar to those obtained with other depths.
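The idea described above can be sketched directly: instead of minimizing the univariate halfspace depth over all directions, minimize it over a small set of random directions. The following is our minimal illustration of that strategy; the function names, the Gaussian direction sampling, and the default number of projections are ours, not the paper's.

```python
import math
import random

def random_tukey_depth(x, sample, n_proj=100, rng=None):
    """Approximate the Tukey (halfspace) depth of point x in `sample`
    by minimizing over n_proj random one-dimensional projections."""
    rng = rng or random.Random(0)
    d = len(x)
    depth = 1.0
    for _ in range(n_proj):
        # Draw a random direction on the unit sphere.
        u = [rng.gauss(0.0, 1.0) for _ in range(d)]
        norm = math.sqrt(sum(c * c for c in u)) or 1.0
        u = [c / norm for c in u]
        proj_x = sum(uc * xc for uc, xc in zip(u, x))
        projs = [sum(uc * sc for uc, sc in zip(u, s)) for s in sample]
        # Univariate halfspace depth along this direction.
        left = sum(p <= proj_x for p in projs) / len(sample)
        right = sum(p >= proj_x for p in projs) / len(sample)
        depth = min(depth, left, right)
    return depth
```

Each projection costs only O(n d), so the total cost is linear in both the sample size and the dimension, which is the computational advantage the abstract emphasizes.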
Title: A new graphical tool of outliers detection in regression models based on recursive estimation
Abstract: We present in this paper a new tool for outlier detection in the context of multiple regression models. This graphical tool is based on recursive estimation of the parameters. Simulations were carried out to illustrate the performance of this graphical procedure. Finally, the tool is applied to real data that contain outliers according to the classical available tools.
Title: Learning from dependent observations
Abstract: In most papers establishing consistency for learning algorithms it is assumed that the observations used for training are realizations of an i.i.d. process. In this paper we go far beyond this classical framework by showing that support vector machines (SVMs) essentially only require that the data-generating process satisfies a certain law of large numbers. We then consider the learnability of SVMs for $\alpha$-mixing (not necessarily stationary) processes for both classification and regression, where for the latter we explicitly allow unbounded noise.
Title: Consistency of support vector machines for forecasting the evolution of an unknown ergodic dynamical system from observations with unknown noise
Abstract: We consider the problem of forecasting the next (observable) state of an unknown ergodic dynamical system from a noisy observation of the present state. Our main result shows, for example, that support vector machines (SVMs) using Gaussian RBF kernels can learn the best forecaster from a sequence of noisy observations if (a) the unknown observational noise process is bounded and has a summable $\alpha$-mixing rate and (b) the unknown ergodic dynamical system is defined by a Lipschitz continuous function on some compact subset of $\mathbb{R}^d$ and has a summable decay of correlations for Lipschitz continuous functions. In order to prove this result we first establish a general consistency result for SVMs and all stochastic processes that satisfy a mixing notion that is substantially weaker than $\alpha$-mixing.
Title: Treelets--An adaptive multi-scale basis for sparse unordered data
Abstract: In many modern applications, including analysis of gene expression and text documents, the data are noisy, high-dimensional, and unordered--with no particular meaning to the given order of the variables. Yet, successful learning is often possible due to sparsity: the fact that the data are typically redundant with underlying structures that can be represented by only a few features. In this paper we present treelets--a novel construction of multi-scale bases that extends wavelets to nonsmooth signals. The method is fully adaptive, as it returns a hierarchical tree and an orthonormal basis which both reflect the internal structure of the data. Treelets are especially well-suited as a dimensionality reduction and feature selection tool prior to regression and classification, in situations where sample sizes are small and the data are sparse with unknown groupings of correlated or collinear variables. The method is also simple to implement and analyze theoretically. Here we describe a variety of situations where treelets perform better than principal component analysis, as well as some common variable selection and cluster averaging schemes. We illustrate treelets on a blocked covariance model and on several data sets (hyperspectral image data, DNA microarray data, and internet advertisements) with highly complex dependencies between variables.
Title: The Role of Time in the Creation of Knowledge
Abstract: In this paper I assume that in humans the creation of knowledge depends on a discrete-time, or stage, sequential decision-making process subjected to a stochastic, information-transmitting environment. For each time-stage, this environment randomly transmits Shannon-type information-packets to the decision-maker, who examines each of them for relevancy and then determines his optimal choices. Using this set of relevant information-packets, the decision-maker adapts, over time, to the stochastic nature of his environment, and optimizes the subjective expected rate-of-growth of knowledge. The decision-maker's optimal actions lead to a decision function that involves, over time, his view of the subjective entropy of the environmental process and other important parameters at each time-stage of the process. Using this model of human behavior, one could create psychometric experiments, using computer simulation and real decision-makers playing programmed games, to measure the resulting human performance.
Title: Strong confidence intervals for autoregression
Abstract: In this short note I apply the methodology of game-theoretic probability to calculating non-asymptotic confidence intervals for the coefficient of a simple first order scalar autoregressive model. The most distinctive feature of the proposed procedure is that with high probability it produces confidence intervals that always cover the true parameter value when applied sequentially.
Title: Clustering and Feature Selection using Sparse Principal Component Analysis
Abstract: In this paper, we study the application of sparse principal component analysis (PCA) to clustering and feature selection problems. Sparse PCA seeks sparse factors, or linear combinations of the data variables, explaining a maximum amount of variance in the data while having only a limited number of nonzero coefficients. PCA is often used as a simple clustering technique and sparse factors allow us here to interpret the clusters in terms of a reduced set of variables. We begin with a brief introduction and motivation on sparse PCA and detail our implementation of the algorithm in d'Aspremont et al. (2005). We then apply these results to some classic clustering and feature selection problems arising in biology.
Title: Model Selection Through Sparse Maximum Likelihood Estimation
Abstract: We consider the problem of estimating the parameters of a Gaussian or binary distribution in such a way that the resulting undirected graphical model is sparse. Our approach is to solve a maximum likelihood problem with an added l_1-norm penalty term. The problem as formulated is convex, but the memory requirements and complexity of existing interior point methods are prohibitive for problems with more than tens of nodes. We present two new algorithms for solving problems with at least a thousand nodes in the Gaussian case. Our first algorithm uses block coordinate descent, and can be interpreted as recursive l_1-norm penalized regression. Our second algorithm, based on Nesterov's first order method, yields a complexity estimate with a better dependence on problem size than existing interior point methods. Using a log determinant relaxation of the log partition function (Wainwright & Jordan, 2006), we show that these same algorithms can be used to solve an approximate sparse maximum likelihood problem for the binary case. We test our algorithms on synthetic data, as well as on gene expression and senate voting records data.
Title: Optimal Solutions for Sparse Principal Component Analysis
Abstract: Given a sample covariance matrix, we examine the problem of maximizing the variance explained by a linear combination of the input variables while constraining the number of nonzero coefficients in this combination. This is known as sparse principal component analysis and has a wide array of applications in machine learning and engineering. We formulate a new semidefinite relaxation to this problem and derive a greedy algorithm that computes a full set of good solutions for all target numbers of nonzero coefficients, with total complexity O(n^3), where n is the number of variables. We then use the same relaxation to derive sufficient conditions for global optimality of a solution, which can be tested in O(n^3) per pattern. We discuss applications in subset selection and sparse recovery and show on artificial examples and biological data that our algorithm does provide globally optimal solutions in many cases.
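The greedy strategy mentioned above can be illustrated as follows: grow the support set one variable at a time, each time adding the variable that most increases the largest eigenvalue of the corresponding covariance submatrix (which is the variance explained by the best unit vector on that support). This sketch is our illustration of the general idea, not the paper's algorithm; the function names and the power-iteration eigensolver are ours.

```python
def top_eigenvalue(mat, iters=200):
    """Largest eigenvalue of a small symmetric PSD matrix via power iteration."""
    n = len(mat)
    v = [1.0] * n
    lam = 0.0
    for _ in range(iters):
        w = [sum(mat[i][j] * v[j] for j in range(n)) for i in range(n)]
        lam = max(abs(c) for c in w) or 1.0
        v = [c / lam for c in w]
    return lam

def greedy_sparse_pca(cov, k):
    """Greedily pick a support of size k maximizing explained variance."""
    n = len(cov)
    support = []
    for _ in range(k):
        best_j, best_val = None, -1.0
        for j in range(n):
            if j in support:
                continue
            s = support + [j]
            sub = [[cov[a][b] for b in s] for a in s]
            val = top_eigenvalue(sub)
            if val > best_val:
                best_j, best_val = j, val
        support.append(best_j)
    return sorted(support), best_val
```

Running the outer loop to k = n yields one candidate solution per target cardinality, matching the "full set of good solutions for all target numbers of nonzero coefficients" the abstract describes.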
Title: Workspace Analysis of the Parallel Module of the VERNE Machine
Abstract: The paper addresses geometric aspects of a spatial three-degree-of-freedom parallel module, which is the parallel module of a hybrid serial-parallel 5-axis machine tool. This parallel module consists of a moving platform that is connected to a fixed base by three non-identical legs. Each leg is made up of one prismatic joint and two pairs of spherical joints, which are connected in a way that the combined effects of the three legs lead to an over-constrained mechanism with complex motion. This motion is defined as a simultaneous combination of rotation and translation. A method for computing the complete workspace of the VERNE parallel module for various tool lengths is presented. An algorithm describing this method is also introduced.
Title: Very fast watermarking by reversible contrast mapping
Abstract: Reversible contrast mapping (RCM) is a simple integer transform that applies to pairs of pixels. For some pairs of pixels, RCM is invertible, even if the least significant bits (LSBs) of the transformed pixels are lost. The data space occupied by the LSBs is suitable for data hiding. The embedded information bit-rates of the proposed spatial domain reversible watermarking scheme are close to the highest bit-rates reported so far. The scheme does not need additional data compression and, in terms of mathematical complexity, it appears to be the lowest-complexity scheme proposed so far. A very fast lookup table implementation is proposed. Robustness against cropping can be ensured as well.
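As we understand it, the pairwise integer transform at the heart of RCM maps a pixel pair (x, y) to (2x - y, 2y - x). The sketch below shows the forward map and its exact inverse, omitting the range clipping and LSB bookkeeping of the full watermarking scheme; treat it as an illustration of the transform's invertibility, not as the complete method.

```python
def rcm_forward(x, y):
    """Forward reversible contrast mapping of a pixel pair."""
    return 2 * x - y, 2 * y - x

def rcm_inverse(xp, yp):
    """Exact inverse: 2*xp + yp = 3*x and xp + 2*yp = 3*y,
    so both pixels are recovered by exact integer division."""
    return (2 * xp + yp) // 3, (xp + 2 * yp) // 3
```

Because the inverse needs only two multiplications and a division per pair, the low computational cost claimed in the abstract is plausible even without the lookup table.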
Title: A New Generalization of Chebyshev Inequality for Random Vectors
Abstract: In this article, we derive a new generalization of Chebyshev inequality for random vectors. We demonstrate that the new generalization is much less conservative than the classical generalization.
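For context, one standard classical generalization of Chebyshev's inequality to random vectors follows from Markov's inequality applied to the squared norm; whether this is the exact baseline the article compares against is our assumption.

```latex
% Classical Chebyshev bound for a random vector X with mean \mu and
% covariance \Sigma, via Markov's inequality applied to \|X-\mu\|^2:
\Pr\bigl(\|X-\mu\| \ge \varepsilon\bigr)
  \le \frac{\mathbb{E}\,\|X-\mu\|^{2}}{\varepsilon^{2}}
  = \frac{\operatorname{tr}(\Sigma)}{\varepsilon^{2}}.
```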
Title: The Cyborg Astrobiologist: Porting from a wearable computer to the Astrobiology Phone-cam
Abstract: We have used a simple camera phone to significantly improve an `exploration system' for astrobiology and geology. This camera phone will make it much easier to develop and test computer-vision algorithms for future planetary exploration. We envision that the `Astrobiology Phone-cam' exploration system can be fruitfully used in other problem domains as well.
Title: Explicit Formula for Constructing Binomial Confidence Interval with Guaranteed Coverage Probability
Abstract: In this paper, we derive an explicit formula for constructing a confidence interval for the binomial parameter with guaranteed coverage probability. The formula overcomes the limitation of the normal approximation, which is asymptotic in nature and thus inevitably introduces unknown errors in applications. Moreover, the formula is very tight in comparison with the classic Clopper-Pearson approach from the perspective of interval width. Based on the rigorous formula, we also obtain approximate formulas with excellent coverage probability performance.
Title: Segmentation and Context of Literary and Musical Sequences
Abstract: We apply a segmentation algorithm, based on the calculation of the Jensen-Shannon divergence between probability distributions, to two symbolic sequences of literary and musical origin. The first sequence represents the successive appearance of characters in a theatrical play, and the second represents the succession of tones from the twelve-tone scale in a keyboard sonata. The algorithm divides the sequences into segments of maximal compositional divergence between them. For the play, these segments are related to changes in the frequency of appearance of different characters and in the geographical setting of the action. For the sonata, the segments correspond to tonal domains and reveal in detail the characteristic tonal progression of this kind of musical composition.
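The core step of such an algorithm can be sketched as follows: scan every split point of a symbolic sequence and keep the one maximizing the (weighted) Jensen-Shannon divergence between the symbol distributions of the two parts. The single-split restriction and all names below are our simplification; the full method applies this recursively with significance criteria.

```python
import math
from collections import Counter

def js_divergence(p, q, wp=0.5, wq=0.5):
    """Weighted Jensen-Shannon divergence between two distributions (dicts)."""
    def entropy(dist):
        return -sum(v * math.log2(v) for v in dist.values() if v > 0)
    m = {s: wp * p.get(s, 0.0) + wq * q.get(s, 0.0) for s in set(p) | set(q)}
    return entropy(m) - wp * entropy(p) - wq * entropy(q)

def freqs(seq):
    """Empirical symbol distribution of a sequence."""
    counts = Counter(seq)
    n = len(seq)
    return {s: k / n for s, k in counts.items()}

def best_split(seq):
    """Split point maximizing the JS divergence between the two parts,
    weighted by the relative lengths of the parts."""
    best, best_d = None, -1.0
    for i in range(1, len(seq)):
        left, right = seq[:i], seq[i:]
        d = js_divergence(freqs(left), freqs(right),
                          len(left) / len(seq), len(right) / len(seq))
        if d > best_d:
            best, best_d = i, d
    return best, best_d
```

On a toy sequence such as "aaaabbbb" the maximal-divergence split falls exactly at the compositional boundary, which is the behaviour the abstract reports for character appearances and tonal domains.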
Title: Singular curves and cusp points in the joint space of 3-RPR parallel manipulators
Abstract: This paper investigates the singular curves in two-dimensional slices of the joint space of a family of planar parallel manipulators. It focuses on special points, referred to as cusp points, which may appear on these curves. Cusp points play an important role in the kinematic behavior of parallel manipulators since they make possible a nonsingular change of assembly mode. The purpose of this study is twofold. First, it reviews an important previous work, which, to the authors' knowledge, has never been exploited yet. Second, it determines the cusp points in any two-dimensional slice of the joint space. First results show that the number of cusp points may vary from zero to eight. This work finds applications in both design and trajectory planning.
Title: Clusters, Graphs, and Networks for Analysing Internet-Web-Supported Communication within a Virtual Community
Abstract: The proposal is to use clusters, graphs and networks as models in order to analyse the Web structure. Clusters, graphs and networks provide knowledge representation and organization. Clusters were generated by co-site analysis. The sample is a set of academic Web sites from the countries belonging to the European Union. These clusters are here revisited from the point of view of graph theory and social network analysis. This is a quantitative and structural analysis. In fact, the Internet is a computer network that connects people and organizations. Thus we may consider it to be a social network. The set of Web academic sites represents an empirical social network, and is viewed as a virtual community. The network structural properties are here analysed applying together cluster analysis, graph theory and social network analysis.
Title: The Kinematics of Manipulators Built From Closed Planar Mechanisms
Abstract: The paper discusses the kinematics of manipulators built from closed planar kinematic chains. A special kinematic scheme is extracted from the array of these mechanisms that looks the most promising for the creation of different types of robotic manipulators. The structural features of this manipulator determine a number of its original properties that essentially simplify its control. These features allow the main control problems to be effectively overcome by application of the simple kinematic problems. The workspace and singular configurations of a basic planar manipulator are studied. By using a graphic simulation method, motions of the designed mechanism are examined. A prototype of this mechanism was implemented to verify the proposed approach.
Title: Removing Manually-Generated Boilerplate from Electronic Texts: Experiments with Project Gutenberg e-Books
Abstract: Collaborative work on unstructured or semi-structured documents, such as in literature corpora or source code, often involves agreed upon templates containing metadata. These templates are not consistent across users and over time. Rule-based parsing of these templates is expensive to maintain and tends to fail as new documents are added. Statistical techniques based on frequent occurrences have the potential to identify automatically a large fraction of the templates, thus reducing the burden on the programmers. We investigate the case of the Project Gutenberg corpus, where most documents are in ASCII format with preambles and epilogues that are often copied and pasted or manually typed. We show that a statistical approach can solve most cases though some documents require knowledge of English. We also survey various technical solutions that make our approach applicable to large data sets.
Title: Moveability and Collision Analysis for Fully-Parallel Manipulators
Abstract: The aim of this paper is to characterize the moveability of fully-parallel manipulators in the presence of obstacles. Fully parallel manipulators are used in applications where accuracy, stiffness or high speeds and accelerations are required. However, one of their main drawbacks is a relatively small workspace compared to that of serial manipulators. This is due mainly to the existence of potential internal collisions and the existence of singularities. In this paper, the notion of free aspect is defined, which makes it possible to exhibit domains of the workspace and the joint space that are free of singularities and collisions. The main application of this study is the moveability analysis in the workspace of the manipulator as well as path-planning, control and design.
Title: Working Modes and Aspects in Fully-Parallel Manipulator
Abstract: The aim of this paper is to characterize the notion of aspect in the workspace and in the joint space for parallel manipulators. In contrast to serial manipulators, parallel manipulators can admit not only multiple inverse kinematic solutions, but also multiple direct kinematic solutions. The notion of aspect, introduced for serial manipulators in [Borrel 86] and redefined for parallel manipulators with only one inverse kinematic solution in [Wenger 1997], is redefined here for general fully parallel manipulators. Two Jacobian matrices appear in the kinematic relations between the joint-rate and the Cartesian-velocity vectors, which are called the "inverse kinematics" and the "direct kinematics" matrices. The study of these matrices allows us to define the parallel and the serial singularities, respectively. The notion of working modes is introduced to separate inverse kinematic solutions. Thus, we can find out domains of the workspace and the joint space that are free of singularities. An application of this study is the moveability analysis in the workspace of the manipulator as well as path-planning and control. This study is illustrated in this paper with a RR-RRR planar parallel manipulator.
Title: The Isoconditioning Loci of A Class of Closed-Chain Manipulators
Abstract: The subject of this paper is a special class of closed-chain manipulators. First, we analyze a family of two-degree-of-freedom (dof) five-bar planar linkages. Two Jacobian matrices appear in the kinematic relations between the joint-rate and the Cartesian-velocity vectors, which are called the "inverse kinematics" and the "direct kinematics" matrices. It is shown that the loci of points of the workspace where the condition number of the direct-kinematics matrix remains constant, i.e., the isoconditioning loci, are the coupler points of the four-bar linkage obtained upon locking the middle joint of the linkage. Furthermore, if the line of centers of the two actuated revolutes is used as the axis of a third actuated revolute, then a three-dof hybrid manipulator is obtained. The isoconditioning loci of this manipulator are surfaces of revolution generated by the isoconditioning curves of the two-dof manipulator, whose axis of symmetry is that of the third actuated revolute.