Title: Approximating Data with Weighted Smoothing Splines
Abstract: Given a data set (t_i, y_i), i=1,..., n, with the t_i in [0,1], non-parametric regression is concerned with the problem of specifying a suitable function f_n:[0,1] -> R such that the data can be reasonably approximated by the points (t_i, f_n(t_i)), i=1,..., n. If a data set exhibits large variations in local behaviour, for example large peaks as in spectroscopy data, then the method must be able to adapt to the local changes in smoothness. Whilst many methods are able to accomplish this, they are less successful at adapting derivatives. In this paper we show how the goal of local adaptivity of the function and its first and second derivatives can be attained in a simple manner using weighted smoothing splines. A residual-based concept of approximation is used which forces local adaptivity of the regression function, together with a global regularization which makes the function as smooth as possible subject to the approximation constraints.
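As a rough illustration of the kind of weighted spline fit the abstract refers to (not the authors' residual-based procedure), the following Python sketch uses scipy's UnivariateSpline, where larger weights near a sharp peak force the spline to follow the data more closely there; the weighting scheme and smoothing level are assumptions chosen for illustration only.

    # Hedged sketch: weighted cubic smoothing spline with locally increased weights.
    # The weights and smoothing factor are illustrative, not the paper's choices.
    import numpy as np
    from scipy.interpolate import UnivariateSpline

    rng = np.random.default_rng(0)
    t = np.linspace(0.0, 1.0, 200)
    y = np.exp(-((t - 0.5) ** 2) / 0.001) + 0.05 * rng.standard_normal(t.size)  # sharp peak plus noise

    w = np.where(np.abs(t - 0.5) < 0.05, 10.0, 1.0)   # heavier weights near the peak
    spline = UnivariateSpline(t, y, w=w, k=3, s=float(t.size))

    f_hat = spline(t)                      # fitted values f_n(t_i)
    f_hat_prime = spline.derivative(1)(t)  # first derivative, which should also adapt locally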
Title: PAC-Bayesian Bounds for Randomized Empirical Risk Minimizers
Abstract: The aim of this paper is to generalize the PAC-Bayesian theorems proved by Catoni in the classification setting to more general problems of statistical inference. We show how to control the deviations of the risk of randomized estimators. Particular attention is paid to randomized estimators drawn in a small neighborhood of classical estimators, whose study makes it possible to control the risk of the latter. These results allow us to bound the risk of very general estimation procedures, as well as to perform model selection.
Title: Hierarchy construction schemes within the Scale set framework
Abstract: Segmentation algorithms based on an energy minimisation framework often depend on a scale parameter which balances a fit to data and a regularising term. Irregular pyramids are defined as a stack of successively reduced graphs. Within this framework, the scale is often defined implicitly as the height in the pyramid. However, each level of an irregular pyramid cannot usually be readily associated with the global optimum of an energy or a global criterion on the base level graph. This last drawback is addressed by the scale set framework designed by Guigues. The methods designed by this author allow one to build a hierarchy and to design cuts within this hierarchy which globally minimise an energy. This paper studies the influence of the construction scheme of the initial hierarchy on the resulting optimal cuts. We propose one sequential and one parallel method, each with two variations. Our sequential methods provide partitions close to the global optima, while our parallel methods require less execution time than the sequential method of Guigues, even on sequential machines.
Title: Medical image computing and computer-aided medical interventions applied to soft tissues. Work in progress in urology
Abstract: Until recently, Computer-Aided Medical Interventions (CAMI) and Medical Robotics have focused on rigid, non-deformable anatomical structures. Nowadays, special attention is paid to soft tissues, raising complex issues due to their mobility and deformation. Minimally invasive digestive surgery was probably one of the first fields where soft tissues were handled through the development of simulators, tracking of anatomical structures and specific assistance robots. However, other clinical domains, for instance urology, are also concerned. Indeed, laparoscopic surgery, new tumour destruction techniques (e.g. HIFU, radiofrequency, or cryoablation), increasingly early detection of cancer, and the use of interventional and diagnostic imaging modalities have recently opened new challenges for urologists and scientists involved in CAMI. Over the last five years, this has resulted in a very significant increase in research and development of computer-aided urology systems. In this paper, we propose a description of the main problems related to computer-aided diagnosis and therapy of soft tissues and give a survey of the different types of assistance offered to the urologist: robotization, image fusion, surgical navigation. Both research projects and operational industrial systems are discussed.
Title: Numerical Sensitivity and Efficiency in the Treatment of Epistemic and Aleatory Uncertainty
Abstract: The treatment of both aleatory and epistemic uncertainty by recent methods often requires a high computational effort. In this abstract, we propose a numerical sampling method that lightens the computational burden of processing the information by means of so-called fuzzy random variables.
Title: Robustly estimating the flow direction of information in complex physical systems
Abstract: We propose a new measure to estimate the direction of information flux in multivariate time series from complex systems. This measure, based on the slope of the phase spectrum (Phase Slope Index), has invariance properties that are important for applications in real physical or biological systems: (a) it is strictly insensitive to mixtures of arbitrary independent sources, (b) it gives meaningful results even if the phase spectrum is not linear, and (c) it properly weights contributions from different frequencies. Simulations of a class of coupled multivariate random data show that for truly unidirectional information flow without additional noise contamination our measure detects the correct direction as well as standard Granger causality. For random mixtures of independent sources, Granger causality erroneously yields highly significant results, whereas our measure correctly becomes non-significant. An application of our novel method to EEG data (88 subjects in eyes-closed condition) reveals a strikingly clear front-to-back information flow in the vast majority of subjects and thus contributes to a better understanding of information processing in the brain.
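For readers who want to experiment with the idea, a minimal Python sketch of a Phase Slope Index style estimate between two signals is given below; it follows the commonly cited form psi = Im(sum_f conj(C(f)) C(f + df)) with C the complex coherency, but the spectral estimation details (windowing, segment length, significance testing) are assumptions and differ from the authors' implementation.

    # Hedged sketch of a Phase Slope Index estimate between two 1-D signals.
    import numpy as np
    from scipy.signal import csd, welch

    def phase_slope_index(x, y, fs=1.0, nperseg=256):
        f, s_xy = csd(x, y, fs=fs, nperseg=nperseg)   # cross-spectral density
        _, s_xx = welch(x, fs=fs, nperseg=nperseg)    # auto-spectra
        _, s_yy = welch(y, fs=fs, nperseg=nperseg)
        c = s_xy / np.sqrt(s_xx * s_yy)               # complex coherency C(f)
        # Sum over adjacent frequency bins; the sign is read as the flow direction
        # (up to the chosen sign convention), e.g. psi > 0 for x -> y.
        return float(np.imag(np.sum(np.conj(c[:-1]) * c[1:])))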
Title: Decomposition During Search for Propagation-Based Constraint Solvers
Abstract: We describe decomposition during search (DDS), an integration of And/Or tree search into propagation-based constraint solvers. The presented search algorithm dynamically decomposes sub-problems of a constraint satisfaction problem into independent partial problems, avoiding redundant work. The paper discusses how DDS interacts with key features that make propagation-based solvers successful: constraint propagation, especially for global constraints, and dynamic search heuristics. We have implemented DDS for the Gecode constraint programming library. Two applications, solution counting in graph coloring and protein structure prediction, exemplify the benefits of DDS in practice.
Title: A New Theoretic Foundation for Cross-Layer Optimization
Abstract: Cross-layer optimization solutions have been proposed in recent years to improve the performance of network users operating in a time-varying, error-prone wireless environment. However, these solutions often rely on ad hoc optimization approaches, which ignore the different environmental dynamics experienced at various layers by a user and violate the layered network architecture of the protocol stack by requiring layers to provide access to their internal protocol parameters to other layers. This paper presents a new theoretic foundation for cross-layer optimization, which allows each layer to make autonomous decisions individually, while maximizing the utility of the wireless user by optimally determining what information needs to be exchanged among layers. Hence, this cross-layer framework does not change the current layered architecture. Specifically, because the wireless user interacts with the environment at various layers of the protocol stack, the cross-layer optimization problem is formulated as a layered Markov decision process (MDP) in which each layer adapts its own protocol parameters and exchanges information (messages) with other layers in order to cooperatively maximize the performance of the wireless user. The message exchange mechanism for determining the optimal cross-layer transmission strategies has been designed for both off-line optimization and on-line dynamic adaptation. We also show that many existing cross-layer optimization algorithms can be formulated as simplified, sub-optimal versions of our layered MDP framework.
Title: Variational inference for large-scale models of discrete choice
Abstract: Discrete choice models are commonly used by applied statisticians in numerous fields, such as marketing, economics, finance, and operations research. When agents in discrete choice models are assumed to have differing preferences, exact inference is often intractable. Markov chain Monte Carlo techniques make approximate inference possible, but the computational cost is prohibitive on the large data sets now becoming routinely available. Variational methods provide a deterministic alternative for approximation of the posterior distribution. We derive variational procedures for empirical Bayes and fully Bayesian inference in the mixed multinomial logit model of discrete choice. The algorithms require only that we solve a sequence of unconstrained optimization problems, which are shown to be convex. Extensive simulations demonstrate that variational methods achieve accuracy competitive with Markov chain Monte Carlo, at a small fraction of the computational cost. Thus, variational methods permit inferences on data sets that otherwise could not be analyzed without bias-inducing modifications to the underlying model.
Title: Analysis of nonlinear modes of variation for functional data
Abstract: A set of curves or images of similar shape is an increasingly common type of functional data collected in the sciences. Principal Component Analysis (PCA) is the most widely used technique to decompose variation in functional data. However, the linear modes of variation found by PCA are not always interpretable by the experimenters. In addition, the modes of variation of interest to the experimenter are not always linear. We present in this paper a new analysis of variance for functional data. Our method is motivated by the goal of decomposing the variation in the data into predetermined and interpretable directions (i.e. modes) of interest. Since some of these modes may be nonlinear, we develop a newly defined ratio of sums of squares which takes into account the curvature of the space of variation. We discuss, in the general case, the consistency of our estimates of variation, using mathematical tools from differential geometry and shape statistics. We successfully applied our method to a motivating example of biological data. This decomposition allows biologists to compare the prevalence of different genetic tradeoffs in a population and to quantify the effect of selection on evolution.
Title: An Approximation Ratio for Biclustering
Abstract: The problem of biclustering consists of the simultaneous clustering of rows and columns of a matrix such that each of the submatrices induced by a pair of row and column clusters is as uniform as possible. In this paper we approximate the optimal biclustering by applying one-way clustering algorithms independently on the rows and on the columns of the input matrix. We show that such a solution yields a worst-case approximation ratio of 1+sqrt(2) under L1-norm for 0-1 valued matrices, and of 2 under L2-norm for real valued matrices.
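A small Python sketch of the strategy described (independent one-way clustering of rows and columns, then scoring the induced submatrices) is given below; it uses k-means and a squared-error uniformity cost purely for illustration, and is not the paper's analysis of the approximation ratio.

    # Hedged sketch: bicluster by clustering rows and columns independently.
    import numpy as np
    from sklearn.cluster import KMeans

    def one_way_biclustering(matrix, n_row_clusters, n_col_clusters, seed=0):
        rows = KMeans(n_clusters=n_row_clusters, n_init=10, random_state=seed).fit_predict(matrix)
        cols = KMeans(n_clusters=n_col_clusters, n_init=10, random_state=seed).fit_predict(matrix.T)
        cost = 0.0
        for r in range(n_row_clusters):
            for c in range(n_col_clusters):
                block = matrix[np.ix_(rows == r, cols == c)]
                if block.size:
                    cost += np.sum((block - block.mean()) ** 2)  # L2-type uniformity cost
        return rows, cols, cost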
Title: The Banff Challenge: Statistical Detection of a Noisy Signal
Abstract: Particle physics experiments such as those run in the Large Hadron Collider result in huge quantities of data, which are boiled down to a few numbers from which it is hoped that a signal will be detected. We discuss a simple probability model for this and derive frequentist and noninformative Bayesian procedures for inference about the signal. Both are highly accurate in realistic cases, with the frequentist procedure having the edge for interval estimation, and the Bayesian procedure yielding slightly better point estimates. We also argue that the significance, or $p$-value, function based on the modified likelihood root provides a comprehensive presentation of the information in the data and should be used for inference.
Title: Density estimation in linear time
Abstract: We consider the problem of choosing a density estimate from a set of distributions F, minimizing the L1-distance to an unknown distribution (Devroye, Lugosi 2001). Devroye and Lugosi analyze two algorithms for the problem: Scheffe tournament winner and minimum distance estimate. The Scheffe tournament estimate requires fewer computations than the minimum distance estimate, but has strictly weaker guarantees than the latter. We focus on the computational aspect of density estimation. We present two algorithms, both with the same guarantee as the minimum distance estimate. The first one, a modification of the minimum distance estimate, uses the same number (quadratic in |F|) of computations as the Scheffe tournament. The second one, called ``efficient minimum loss-weight estimate,'' uses only a linear number of computations, assuming that F is preprocessed. We also give examples showing that the guarantees of the algorithms cannot be improved and explore randomized algorithms for density estimation.
Title: The source coding game with a cheating switcher
Abstract: Motivated by the lossy compression of an active-vision video stream, we consider the problem of finding the rate-distortion function of an arbitrarily varying source (AVS) composed of a finite number of subsources with known distributions. Berger's paper 'The Source Coding Game' (IEEE Trans. Inform. Theory, 1971) solves this problem under the condition that the adversary is allowed only strictly causal access to the subsource realizations. We consider the case when the adversary has access to the subsource realizations non-causally. Using the type-covering lemma, this new rate-distortion function is determined to be the maximum of the IID rate-distortion function over a set of source distributions attainable by the adversary. We then extend the results to allow for partial or noisy observations of subsource realizations. We further explore the model by attempting to find the rate-distortion function when the adversary is actually helpful. Finally, a bound is developed on the uniform continuity of the IID rate-distortion function for finite-alphabet sources. The bound is used to give a sufficient number of distributions that need to be sampled to compute the rate-distortion function of an AVS to within a certain accuracy. The bound is also used to give a rate of convergence for the estimate of the rate-distortion function for an unknown IID finite-alphabet source.
Title: A Class of LULU Operators on Multi-Dimensional Arrays
Abstract: The LULU operators for sequences are extended to multi-dimensional arrays via the morphological concept of connection in a way which preserves their essential properties, e.g. they are separators and form a four-element fully ordered semi-group. The power of the operators is demonstrated by deriving a total variation preserving discrete pulse decomposition of images.
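For the one-dimensional case that the extension starts from, the floor and ceiling operators L_n and U_n are usually described as a maximum of running window minima and its dual; the Python sketch below is a naive illustration of that description (boundary handling and efficiency are simplified assumptions, and the multi-dimensional, connection-based extension of the paper is not shown).

    # Hedged sketch of the 1-D LULU floor operator L_n (max of window minima)
    # and its dual U_n; a naive implementation for illustration only.
    import numpy as np

    def lulu_L(x, n):
        x = np.asarray(x, dtype=float)
        window_mins = np.array([x[j:j + n + 1].min() for j in range(len(x) - n)])
        out = x.copy()
        for i in range(len(x)):
            lo, hi = max(0, i - n), min(len(window_mins) - 1, i)
            if lo <= hi:
                out[i] = window_mins[lo:hi + 1].max()  # max over windows of length n+1 containing i
        return out

    def lulu_U(x, n):
        return -lulu_L(-np.asarray(x, dtype=float), n)  # min of window maxima, by duality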
Title: Gibbs Sampling for a Bayesian Hierarchical General Linear Model
Abstract: We consider a Bayesian hierarchical version of the normal theory general linear model which is practically relevant in the sense that it is general enough to have many applications and it is not straightforward to sample directly from the corresponding posterior distribution. Thus we study a block Gibbs sampler that has the posterior as its invariant distribution. In particular, we establish that the Gibbs sampler converges at a geometric rate. This allows us to establish conditions for a central limit theorem for the ergodic averages used to estimate features of the posterior. Geometric ergodicity is also a key component for using batch means methods to consistently estimate the variance of the asymptotic normal distribution. Together, our results give practitioners the tools to be as confident in inferences based on the observations from the Gibbs sampler as they would be with inferences based on random samples from the posterior. Our theoretical results are illustrated with an application to data on the cost of health plans issued by health maintenance organizations.
Title: Common knowledge logic in a higher order proof assistant?
Abstract: This paper presents experiments on common knowledge logic, conducted with the help of the proof assistant Coq. The main feature of common knowledge logic is the eponymous modality that says that a group of agents shares knowledge about a certain proposition in an inductive way. This modality is specified by using a fixpoint approach. Furthermore, from these experiments, we discuss and compare the structure of theorems that can be proved in specific theories that use common knowledge logic. These structures manifest the interplay between the theory (as implemented in the proof assistant Coq) and the metatheory.
Title: CLAIRLIB Documentation v1.03
Abstract: The Clair library is intended to simplify a number of generic tasks in Natural Language Processing (NLP), Information Retrieval (IR), and Network Analysis. Its architecture also allows for external software to be plugged in with very little effort. Functionality native to Clairlib includes Tokenization, Summarization, LexRank, Biased LexRank, Document Clustering, Document Indexing, PageRank, Biased PageRank, Web Graph Analysis, Network Generation, Power Law Distribution Analysis, Network Analysis (clustering coefficient, degree distribution plotting, average shortest path, diameter, triangles, shortest path matrices, connected components), Cosine Similarity, Random Walks on Graphs, Statistics (distributions, tests), Tf, Idf, Community Finding.
Title: Computer- and robot-assisted urological surgery
Abstract: The author reviews the computer and robotic tools available to urologists to help in diagnosis and technical procedures. The first part concerns the contribution of robotics and presents several systems at various stages of development (laboratory prototypes, systems under validation or marketed systems). The second part describes image fusion tools and navigation systems currently under development or evaluation. Several studies on computerized simulation of urological procedures are also presented.
Title: Universal Intelligence: A Definition of Machine Intelligence
Abstract: A fundamental problem in artificial intelligence is that nobody really knows what intelligence is. The problem is especially acute when we need to consider artificial systems which are significantly different to humans. In this paper we approach this problem in the following way: We take a number of well known informal definitions of human intelligence that have been given by experts, and extract their essential features. These are then mathematically formalised to produce a general measure of intelligence for arbitrary machines. We believe that this equation formally captures the concept of machine intelligence in the broadest reasonable sense. We then show how this formal definition is related to the theory of universal optimal learning agents. Finally, we survey the many other tests and definitions of intelligence that have been proposed for machines.
Title: Graph kernels between point clouds
Abstract: Point clouds are sets of points in two or three dimensions. Most kernel methods for learning on sets of points have not yet dealt with the specific geometrical invariances and practical constraints associated with point clouds in computer vision and graphics. In this paper, we present extensions of graph kernels for point clouds, which make it possible to use kernel methods for such objects as shapes, line drawings, or any three-dimensional point clouds. In order to design rich and numerically efficient kernels with as few free parameters as possible, we use kernels between covariance matrices and their factorizations on graphical models. We derive polynomial time dynamic programming recursions and present applications to recognition of handwritten digits and Chinese characters from few training examples.
Title: Pattern Recognition System Design with Linear Encoding for Discrete Patterns
Abstract: In this paper, designs and analyses of compressive recognition systems are discussed, and also a method of establishing a dual connection between designs of good communication codes and designs of recognition systems is presented. Pattern recognition systems based on compressed patterns and compressed sensor measurements can be designed using low-density matrices. We examine truncation encoding where a subset of the patterns and measurements are stored perfectly while the rest is discarded. We also examine the use of LDPC parity check matrices for compressing measurements and patterns. We show how more general ensembles of good linear codes can be used as the basis for pattern recognition system design, yielding system design strategies for more general noise models.
Title: Network Tomography: Identifiability and Fourier Domain Estimation
Abstract: The statistical problem for network tomography is to infer the distribution of $X$, with mutually independent components, from a measurement model $Y=AX$, where $A$ is a given binary matrix representing the routing topology of a network under consideration. The challenge is that the dimension of $X$ is much larger than that of $Y$, and thus the problem is often called ill-posed. This paper studies some statistical aspects of network tomography. We first address the identifiability issue and prove that the distribution of $X$ is identifiable up to a shift parameter under mild conditions. We then use a mixture model of characteristic functions to derive a fast algorithm for estimating the distribution of $X$ based on the generalized method of moments. Through extensive model simulation and real Internet trace driven simulation, the proposed approach is shown to compare favorably with previous methods using simple discretization for inferring link delays in a heterogeneous network.
Title: Improving the Performance of PieceWise Linear Separation Incremental Algorithms for Practical Hardware Implementations
Abstract: In this paper we shall review the common problems associated with Piecewise Linear Separation incremental algorithms. This kind of neural model yields poor performance when dealing with some classification problems, due to the evolving schemes used to construct the resulting networks. So as to avoid this undesirable behavior we shall propose a modification criterion. It is based upon the definition of a function which will provide information about the quality of the network growth process during the learning phase. This function is evaluated periodically as the network structure evolves, and will permit, as we shall show through exhaustive benchmarks, a considerable improvement of the performance (measured in terms of network complexity and generalization capabilities) offered by the networks generated by these incremental models.
Title: Framework and Resources for Natural Language Parser Evaluation
Abstract: Because of the wide variety of contemporary practices used in the automatic syntactic parsing of natural languages, it has become necessary to analyze and evaluate the strengths and weaknesses of different approaches. This research is all the more necessary because there are currently no genre- and domain-independent parsers that are able to analyze unrestricted text with 100% preciseness (I use this term to refer to the correctness of analyses assigned by a parser). All these factors create a need for methods and resources that can be used to evaluate and compare parsing systems. This research describes: (1) A theoretical analysis of current achievements in parsing and parser evaluation. (2) A framework (called FEPa) that can be used to carry out practical parser evaluations and comparisons. (3) A set of new evaluation resources: FiEval is a Finnish treebank under construction, and MGTS and RobSet are parser evaluation resources in English. (4) The results of experiments in which the developed evaluation framework and the two resources for English were used for evaluating a set of selected parsers.
Title: Nonparametric estimation for a stochastic volatility model
Abstract: Consider discrete time observations $(X_{\ell\delta})_{1\leq \ell \leq n+1}$ of the process $X$ satisfying $dX_t = \sqrt{V_t}\, dB_t$, with $V_t$ a one-dimensional positive diffusion process independent of the Brownian motion $B$. For both the drift and the diffusion coefficient of the unobserved diffusion $V$, we propose nonparametric least squares estimators, and provide bounds for their risk. Estimators are chosen among a collection of functions belonging to a finite-dimensional space whose dimension is selected by a data-driven procedure. Implementation on simulated data illustrates how the method works.
Title: Convergence properties of the expected improvement algorithm
Abstract: This paper has been withdrawn from the arXiv. It is now published by Elsevier in the Journal of Statistical Planning and Inference, under the modified title "Convergence properties of the expected improvement algorithm with fixed mean and covariance functions". See http://dx.doi.org/10.1016/j.jspi.2010.04.018 An author-generated post-print version is available from the HAL repository of SUPELEC at http://hal-supelec.archives-ouvertes.fr/hal-00217562 Abstract : "This paper deals with the convergence of the expected improvement algorithm, a popular global optimization algorithm based on a Gaussian process model of the function to be optimized. The first result is that under some mild hypotheses on the covariance function k of the Gaussian process, the expected improvement algorithm produces a dense sequence of evaluation points in the search domain, when the function to be optimized is in the reproducing kernel Hilbert space generated by k. The second result states that the density property also holds for P-almost all continuous functions, where P is the (prior) probability distribution induced by the Gaussian process."
Title: Improved Collaborative Filtering Algorithm via Information Transformation
Abstract: In this paper, we propose a spreading activation approach for collaborative filtering (SA-CF). By using the opinion spreading process, the similarity between any two users can be obtained. The algorithm has remarkably higher accuracy than the standard collaborative filtering (CF) using Pearson correlation. Furthermore, we introduce a free parameter $\beta$ to regulate the contributions of objects to user-user correlations. The numerical results indicate that decreasing the influence of popular objects can further improve the algorithmic accuracy and personality. We argue that a better algorithm should simultaneously require less computation and generate higher accuracy. Accordingly, we further propose an algorithm involving only the top-$N$ similar neighbors for each target user, which has both lower computational complexity and higher algorithmic accuracy.
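As a simple point of reference for the top-N neighbor idea mentioned at the end, the Python sketch below predicts scores for one user from a user-item rating matrix using only the N most similar users; it uses Pearson similarity (the baseline the paper compares against) rather than the proposed spreading-activation similarity, and the weighting scheme is an illustrative assumption.

    # Hedged sketch: user-based CF restricted to the top-N most similar users.
    import numpy as np

    def topn_user_cf_scores(ratings, target_user, n_neighbors=20):
        n_users = ratings.shape[0]
        sims = np.full(n_users, -np.inf)
        for u in range(n_users):
            if u != target_user:
                sims[u] = np.corrcoef(ratings[target_user], ratings[u])[0, 1]
        sims = np.where(np.isnan(sims), -np.inf, sims)        # guard against constant rating rows
        neighbors = np.argsort(sims)[-n_neighbors:]           # indices of the top-N similar users
        weights = np.clip(sims[neighbors], 0.0, None)         # keep only positive similarities
        if weights.sum() == 0:
            return np.zeros(ratings.shape[1])
        return weights @ ratings[neighbors] / weights.sum()   # predicted scores per item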
Title: Tests of Machine Intelligence
Abstract: Although the definition and measurement of intelligence is clearly of fundamental importance to the field of artificial intelligence, no general survey of definitions and tests of machine intelligence exists. Indeed few researchers are even aware of alternatives to the Turing test and its many derivatives. In this paper we fill this gap by providing a short survey of the many tests of machine intelligence that have been proposed.
Title: A Fast Hierarchical Multilevel Image Segmentation Method using Unbiased Estimators
Abstract: This paper proposes a novel method for segmentation of images by hierarchical multilevel thresholding. The method is global, agglomerative in nature and disregards pixel locations. It involves the optimization of the ratio of the unbiased estimators of within-class to between-class variances. We obtain a recursive relation at each step for the variances, which expedites the process. The efficacy of the method is shown in a comparison with some well-known methods.
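Since the abstract does not spell out the criterion, the Python sketch below shows one plausible reading of a single split step: pick the threshold on the (location-free) set of intensity values that maximizes a between-class to within-class variance ratio, with the within-class part using unbiased (ddof=1) estimates; the paper's recursive variance updates and the full hierarchical procedure are not reproduced.

    # Hedged sketch: one thresholding step maximizing a between/within variance ratio.
    import numpy as np

    def best_threshold(values):
        v = np.sort(np.asarray(values, dtype=float).ravel())   # pixel locations ignored
        grand_mean = v.mean()
        best_t, best_ratio = None, -np.inf
        for t in np.unique(v)[1:-1]:
            lo, hi = v[v < t], v[v >= t]
            if len(lo) < 2 or len(hi) < 2:
                continue                                        # need >= 2 samples for ddof=1
            within = lo.var(ddof=1) + hi.var(ddof=1)
            between = len(lo) * (lo.mean() - grand_mean) ** 2 + len(hi) * (hi.mean() - grand_mean) ** 2
            ratio = between / within if within > 0 else -np.inf
            if ratio > best_ratio:
                best_t, best_ratio = t, ratio
        return best_t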
Title: TRUST-TECH based Methods for Optimization and Learning
Abstract: Many problems that arise in the machine learning domain deal with nonlinearity and quite often require users to obtain global optimal solutions rather than local optimal ones. Optimization problems are inherent in machine learning algorithms and hence many methods in machine learning were inherited from the optimization literature. In what is popularly known as the initialization problem, the final set of parameters obtained depends significantly on the given initialization values. The recently developed TRUST-TECH (TRansformation Under STability-reTaining Equilibria CHaracterization) methodology systematically explores the subspace of the parameters to obtain a complete set of local optimal solutions. In this thesis work, we propose TRUST-TECH based methods for solving several optimization and machine learning problems. Two stages, namely the local stage and the neighborhood-search stage, are repeated alternately in the solution space to achieve improvements in the quality of the solutions. Our methods were tested on both synthetic and real datasets and the advantages of using this novel framework are clearly manifested. This framework not only reduces the sensitivity to initialization, but also allows the flexibility for practitioners to use various global and local methods that work well for a particular problem of interest. Other hierarchical stochastic algorithms, such as evolutionary algorithms and smoothing algorithms, are also studied, and frameworks for combining these methods with TRUST-TECH have been proposed and evaluated on several test systems.
Title: Simulation of the matrix Bingham-von Mises-Fisher distribution, with applications to multivariate and relational data
Abstract: Orthonormal matrices play an important role in reduced-rank matrix approximations and the analysis of matrix-valued data. A matrix Bingham-von Mises-Fisher distribution is a probability distribution on the set of orthonormal matrices that includes linear and quadratic terms, and arises as a posterior distribution in latent factor models for multivariate and relational data. This article describes rejection and Gibbs sampling algorithms for sampling from this family of distributions, and illustrates their use in the analysis of a protein-protein interaction network.
Title: Probabilistic Visual Secret Sharing Schemes for Gray-scale images and Color images
Abstract: Visual secret sharing (VSS) is an encryption technique that utilizes the human visual system in recovering the secret image and does not require any complex calculation. Pixel expansion has been a major issue of VSS schemes. A number of probabilistic VSS schemes with minimum pixel expansion have been proposed for binary secret images. This paper presents a general probabilistic (k, n)-VSS scheme for gray-scale images and another scheme for color images. With our schemes, the pixel expansion can be set to a user-defined value. When this value is 1, there is no pixel expansion at all. The quality of reconstructed secret images, measured by Average Relative Difference, is equivalent to the Relative Difference of existing deterministic schemes. Previous probabilistic VSS schemes for black-and-white images with respect to pixel expansion can be viewed as special cases of the schemes proposed here.
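To make the "no pixel expansion" idea concrete, the Python sketch below implements the classical probabilistic (2, 2) scheme for a binary secret image, which the abstract treats as a special case: each share pixel is a single random bit, stacking (OR) always reproduces black pixels and reproduces white pixels only about half the time, giving probabilistic contrast without expansion. The gray-scale and color generalizations of the paper are not shown.

    # Hedged sketch: probabilistic (2, 2) visual secret sharing for a binary image
    # (1 = black, 0 = white), with no pixel expansion.
    import numpy as np

    def probabilistic_vss_2_of_2(secret, seed=0):
        rng = np.random.default_rng(seed)
        secret = np.asarray(secret, dtype=np.uint8)
        r = rng.integers(0, 2, size=secret.shape, dtype=np.uint8)  # one random bit per pixel
        share1 = r
        share2 = np.where(secret == 1, 1 - r, r)    # complementary bit only for black pixels
        return share1, share2

    def stack(share1, share2):
        return np.maximum(share1, share2)            # OR: black always black, white black ~half the time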
Title: Online EM Algorithm for Latent Data Models