Title: Learning Bayesian Networks with the bnlearn R Package
Abstract: bnlearn is an R package which includes several algorithms for learning the structure of Bayesian networks with either discrete or continuous variables. Both constraint-based and score-based algorithms are implemented, and can use the functionality provided by the snow package to improve their performance via parallel computing. Several network scores and conditional independence tests are available for both the learning algorithms and independent use. Advanced plotting options are provided by the Rgraphviz package.
Title: Gabor wavelet analysis and the fractional Hilbert transform
Abstract: We propose an amplitude-phase representation of the dual-tree complex wavelet transform (DT-CWT) which provides an intuitive interpretation of the associated complex wavelet coefficients. The representation, in particular, is based on the shifting action of the group of fractional Hilbert transforms (fHT) which allow us to extend the notion of arbitrary phase-shifts beyond pure sinusoids. We explicitly characterize this shifting action for a particular family of Gabor-like wavelets which, in effect, links the corresponding dual-tree transform with the framework of windowed-Fourier analysis. We then extend these ideas to the bivariate DT-CWT based on certain directional extensions of the fHT. In particular, we derive a signal representation involving the superposition of direction-selective wavelets affected with appropriate phase-shifts.
Title: Self-consistent method for density estimation
Abstract: The estimation of a density profile from experimental data points is a challenging problem, usually tackled by plotting a histogram. Prior assumptions on the nature of the density, from its smoothness to the specification of its form, allow the design of more accurate estimation procedures, such as Maximum Likelihood. Our aim is to construct a procedure that makes no explicit assumptions while still providing an accurate estimate of the density. We introduce the self-consistent estimate: the power spectrum of a candidate density is given, and an estimation procedure is constructed on the assumption, to be relaxed later, that the candidate is correct. The self-consistent estimate is defined as a candidate density that precisely reproduces itself. Our main result is to derive the exact expression of the self-consistent estimate for any given dataset, and to study its properties. Applications of the method require neither priors on the form of the density nor the subjective choice of parameters. A cutoff frequency, akin to a bin size or a kernel bandwidth, emerges naturally from the derivation. We apply the self-consistent estimate to artificial data generated from various distributions and show that it reaches the theoretical limit for the scaling of the square error with the dataset size.
Title: Fast adaptive elliptical filtering using box splines
Abstract: We demonstrate that it is possible to filter an image with an elliptic window of varying size, elongation and orientation with a fixed computational cost per pixel. Our method involves the application of a suitable global pre-integrator followed by a pointwise-adaptive localization mesh. We present the basic theory for the 1D case using a B-spline formalism and then appropriately extend it to 2D using radially-uniform box splines. The size and ellipticity of these radially-uniform box splines is adaptively controlled. Moreover, they converge to Gaussians as the order increases. Finally, we present a fast and practical directional filtering algorithm that has the capability of adapting to the local image features.
Title: Learning networks from high dimensional binary data: An application to genomic instability data
Abstract: Genomic instability, the propensity for aberrations in chromosomes, plays a critical role in the development of many diseases. High throughput genotyping experiments have been performed to study genomic instability in diseases. The output of such experiments can be summarized as high dimensional binary vectors, where each binary variable records aberration status at one marker locus. It is of keen interest to understand how these aberrations interact with each other. In this paper, we propose a novel method to infer the interactions among aberration events. The method is based on penalized logistic regression with an extension to account for spatial correlation in the genomic instability data. We conduct extensive simulation studies and show that the proposed method performs well in the situations considered. Finally, we illustrate the method using genomic instability data from breast cancer samples.
Title: A Dynamic Boundary Guarding Problem with Translating Targets
Abstract: We introduce a problem in which a service vehicle seeks to guard a deadline (boundary) from dynamically arriving mobile targets. The environment is a rectangle and the deadline is one of its edges. Targets arrive continuously over time on the edge opposite the deadline, and move towards the deadline at a fixed speed. The goal for the vehicle is to maximize the fraction of targets that are captured before reaching the deadline. We consider two cases: when the service vehicle is faster than the targets, and when it is slower than the targets. In the first case we develop a novel vehicle policy based on computing longest paths in a directed acyclic graph. We give a lower bound on the capture fraction of the policy and show that the policy is optimal when the distance between the target arrival edge and the deadline becomes very large. We present numerical results which suggest near-optimal performance away from this limiting regime. In the second case, when the vehicle is slower than the targets, we propose a policy based on servicing fractions of the translational minimum Hamiltonian path. In the limit of low target speed and high arrival rate, the capture fraction of this policy is within a small constant factor of the optimal.
Title: A simple sketching algorithm for entropy estimation
Abstract: We consider the problem of approximating the empirical Shannon entropy of a high-frequency data stream under the relaxed strict-turnstile model, when space limitations make exact computation infeasible. An equivalent measure of entropy is the Renyi entropy that depends on a constant alpha. This quantity can be estimated efficiently and unbiasedly from a low-dimensional synopsis called an alpha-stable data sketch via the method of compressed counting. An approximation to the Shannon entropy can be obtained from the Renyi entropy by taking alpha sufficiently close to 1. However, practical guidelines for parameter calibration with respect to alpha are lacking. We avoid this problem by showing that the random variables used in estimating the Renyi entropy can be transformed to have a proper distributional limit as alpha approaches 1: the maximally skewed, strictly stable distribution with alpha = 1 defined on the entire real line. We propose a family of asymptotically unbiased log-mean estimators of the Shannon entropy, indexed by a constant zeta > 0, that can be computed in a single-pass algorithm to provide an additive approximation. We recommend the log-mean estimator with zeta = 1 that has exponentially decreasing tail bounds on the error probability, asymptotic relative efficiency of 0.932, and near-optimal computational complexity.
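As a hedged aside (our illustration, not the paper's sketching algorithm): the limit the abstract relies on, namely that the Renyi entropy H_alpha = log(sum_i p_i^alpha) / (1 - alpha) approaches the Shannon entropy as alpha tends to 1, is easy to check numerically on a toy distribution:

```python
import math

# Toy discrete distribution standing in for the empirical stream frequencies.
p = [0.5, 0.25, 0.125, 0.125]

def shannon(p):
    # Shannon entropy in nats.
    return -sum(pi * math.log(pi) for pi in p if pi > 0)

def renyi(p, alpha):
    # Renyi entropy of order alpha (alpha != 1), in nats.
    return math.log(sum(pi ** alpha for pi in p)) / (1.0 - alpha)

H = shannon(p)
for alpha in (1.5, 1.1, 1.01, 1.001):
    # The gap to the Shannon entropy shrinks as alpha approaches 1.
    print(alpha, renyi(p, alpha), H)
```

The paper's contribution is the harder streaming version of this limit: making the alpha-to-1 passage well-behaved for estimators built from alpha-stable sketches rather than from exact probabilities.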
Title: An improved axiomatic definition of information granulation
Abstract: To capture the uncertainty of information or knowledge in information systems, various information granulations, also known as knowledge granulations, have been proposed. Recently, several axiomatic definitions of information granulation have been introduced. In this paper, we try to improve these axiomatic definitions and give a universal construction of information granulation by relating information granulations with a class of functions of multiple variables. We show that the improved axiomatic definition has some concrete information granulations in the literature as instances.
Title: Numerical Comparison of Cusum and Shiryaev-Roberts Procedures for Detecting Changes in Distributions
Abstract: The CUSUM procedure is known to be optimal for detecting a change in distribution under a minimax scenario, whereas the Shiryaev-Roberts procedure is optimal for detecting a change that occurs at a distant time horizon. As a simpler alternative to the conventional Monte Carlo approach, we propose a numerical method for the systematic comparison of the two detection schemes in both settings, i.e., minimax and for detecting changes that occur in the distant future. Our goal is accomplished by deriving a set of exact integral equations for the performance metrics, which are then solved numerically. We present detailed numerical results for the problem of detecting a change in the mean of a Gaussian sequence, which show that the difference between the two procedures is significant only when detecting small changes.
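To make the two detection schemes concrete, here is a minimal sketch (ours, not the paper's integral-equation method) of the CUSUM and Shiryaev-Roberts recursions for a unit-variance Gaussian sequence whose mean shifts from 0 to mu, run on a deterministic toy stream:

```python
import math

def llr(z, mu=1.0):
    # Log-likelihood ratio of N(mu, 1) vs N(0, 1) for one observation z.
    return mu * z - mu * mu / 2.0

def cusum_alarm(data, h, mu=1.0):
    # CUSUM: W_t = max(0, W_{t-1} + LLR_t); alarm when W_t >= h.
    w = 0.0
    for t, z in enumerate(data, 1):
        w = max(0.0, w + llr(z, mu))
        if w >= h:
            return t
    return None

def shiryaev_roberts_alarm(data, a, mu=1.0):
    # Shiryaev-Roberts: R_t = (1 + R_{t-1}) * exp(LLR_t); alarm when R_t >= a.
    r = 0.0
    for t, z in enumerate(data, 1):
        r = (1.0 + r) * math.exp(llr(z, mu))
        if r >= a:
            return t
    return None

# Deterministic toy stream: in-control mean 0 for 100 steps, then mean 2.
data = [0.0] * 100 + [2.0] * 50
print(cusum_alarm(data, h=8.0), shiryaev_roberts_alarm(data, a=3000.0))
```

The thresholds h and a are illustrative choices; the paper's point is precisely that comparing the two schemes fairly requires computing their performance metrics exactly rather than eyeballing runs like this one.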
Title: ABC-LogitBoost for Multi-class Classification
Abstract: We develop abc-logitboost, based on the prior work on abc-boost and robust logitboost. Our extensive experiments on a variety of datasets demonstrate the considerable improvement of abc-logitboost over logitboost and abc-mart.
Title: Decentralized Sequential Hypothesis Testing using Asynchronous Communication
Abstract: We present a test for the problem of decentralized sequential hypothesis testing, which is asymptotically optimum. By selecting a suitable sampling mechanism at each sensor, communication between sensors and fusion center is asynchronous and limited to 1-bit data. The proposed SPRT-like test turns out to be order-2 asymptotically optimum in the case of continuous time and continuous path signals, while in discrete time this strong asymptotic optimality property is preserved under proper conditions. If these conditions do not hold, then we can show optimality of order-1. Simulations corroborate the excellent performance characteristics of the test of interest.
Title: Co-occurrence Matrix and Fractal Dimension for Image Segmentation
Abstract: One of the most important tasks in image processing and machine vision is object recognition, and the success of many proposed methods relies on a suitable choice of algorithm for the segmentation of an image. This paper focuses on how to apply texture operators based on the concepts of fractal dimension and co-occurrence matrix to the problem of object recognition, and a new method based on fractal dimension is introduced. Several images, in which the result of the segmentation can be shown, are used to illustrate the use of each method, and a comparative study of the operators is made.
Title: Handwritten Farsi Character Recognition using Artificial Neural Network
Abstract: Neural networks have been used for character recognition for many years, but most of the work has been confined to English character recognition. To date, very little work has been reported on handwritten Farsi character recognition. In this paper, we attempt to recognize handwritten Farsi characters using a multilayer perceptron (MLP) with one hidden layer. The error backpropagation algorithm has been used to train the MLP network. In addition, an analysis has been carried out to determine the number of hidden nodes needed to achieve high performance of the backpropagation network in the recognition of handwritten Farsi characters. The system has been trained on several different forms of handwriting provided by both male and female participants of different age groups. This rigorous training results in an automatic HCR system based on the MLP network. The experiments were carried out on two hundred fifty samples from five writers. The results showed that MLP networks trained by the error backpropagation algorithm are superior in recognition accuracy and memory usage, with the backpropagation network providing good recognition accuracy of more than 80% for handwritten Farsi characters.
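The architecture described, a one-hidden-layer MLP trained by error backpropagation, can be sketched at toy scale as follows. This is our illustration, not the paper's system: the XOR data below merely stands in for character feature vectors, and the layer sizes and learning rate are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data standing in for character feature vectors: the XOR problem,
# which a single-hidden-layer MLP can solve but a linear model cannot.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], float)
y = np.array([[0], [1], [1], [0]], float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

n_hidden = 4
W1 = rng.normal(0, 1, (2, n_hidden)); b1 = np.zeros(n_hidden)
W2 = rng.normal(0, 1, (n_hidden, 1)); b2 = np.zeros(1)

lr = 1.0
losses = []
for epoch in range(2000):
    # Forward pass through hidden and output layers.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    losses.append(np.mean((out - y) ** 2))
    # Backpropagate the squared error (constant factors absorbed into lr).
    d_out = (out - y) * out * (1 - out)   # delta at the output layer
    d_h = (d_out @ W2.T) * h * (1 - h)    # delta at the hidden layer
    W2 -= lr * h.T @ d_out; b2 -= lr * d_out.sum(0)
    W1 -= lr * X.T @ d_h;  b1 -= lr * d_h.sum(0)

print(round(losses[0], 4), round(losses[-1], 4))
```

The training loss drops as the network learns the mapping; scaling this sketch to real character images mainly means widening the input and output layers and, as the abstract notes, tuning the hidden-layer size.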
Title: Multiple Retrieval Models and Regression Models for Prior Art Search
Abstract: This paper presents the system called PATATRAS (PATent and Article Tracking, Retrieval and AnalysiS) realized for the IP track of CLEF 2009. Our approach presents three main characteristics: 1. The usage of multiple retrieval models (KL, Okapi) and term index definitions (lemma, phrase, concept) for the three languages considered in the present track (English, French, German) producing ten different sets of ranked results. 2. The merging of the different results based on multiple regression models using an additional validation set created from the patent collection. 3. The exploitation of patent metadata and of the citation structures for creating restricted initial working sets of patents and for producing a final re-ranking regression model. As we exploit specific metadata of the patent documents and the citation relations only at the creation of initial working sets and during the final post ranking step, our architecture remains generic and easy to extend.
Title: Geometry of the restricted Boltzmann machine
Abstract: The restricted Boltzmann machine is a graphical model for binary random variables. Based on a complete bipartite graph separating hidden and observed variables, it is the binary analog to the factor analysis model. We study this graphical model from the perspectives of algebraic statistics and tropical geometry, starting with the observation that its Zariski closure is a Hadamard power of the first secant variety of the Segre variety of projective lines. We derive a dimension formula for the tropicalized model, and we use it to show that the restricted Boltzmann machine is identifiable in many cases. Our methods include coding theory and geometry of linear threshold functions.
Title: An OLAC Extension for Dravidian Languages
Abstract: OLAC was founded in 2000 for creating online databases of language resources. This paper reviews the bottom-up, distributed character of the project and proposes an extension of the architecture for Dravidian languages. An ontological structure is considered for effective natural language processing (NLP), and its advantages over statistical methods are reviewed.
Title: Connecting tables with zero-one entries by a subset of a Markov basis
Abstract: We discuss connecting tables with zero-one entries by a subset of a Markov basis. In this paper, as a Markov basis we consider the Graver basis, which corresponds to the unique minimal Markov basis for the Lawrence lifting of the original configuration. Since the Graver basis tends to be large, it is of interest to clarify conditions such that a subset of the Graver basis, in particular a minimal Markov basis itself, connects tables with zero-one entries. We give some theoretical results on the connectivity of tables with zero-one entries. We also study some common models, where a minimal Markov basis for tables without the zero-one restriction does not connect tables with zero-one entries.
Title: The eel-like robot
Abstract: The aim of this project is to design, study and build an "eel-like robot" prototype able to swim in three dimensions. The study is based on the analysis of eel swimming and results in the realization of a prototype with 12 vertebrae, a skin and a head with two fins. To reach these objectives, a multidisciplinary group of teams and laboratories has been formed in the framework of two French projects.
Title: Bayesian orthogonal component analysis for sparse representation
Abstract: This paper addresses the problem of identifying a lower dimensional space where observed data can be sparsely represented. This under-complete dictionary learning task can be formulated as a blind separation problem of sparse sources linearly mixed with an unknown orthogonal mixing matrix. This issue is formulated in a Bayesian framework. First, the unknown sparse sources are modeled as Bernoulli-Gaussian processes. To promote sparsity, a weighted mixture of an atom at zero and a Gaussian distribution is proposed as prior distribution for the unobserved sources. A non-informative prior distribution defined on an appropriate Stiefel manifold is selected for the mixing matrix. The Bayesian inference on the unknown parameters is conducted using a Markov chain Monte Carlo (MCMC) method. A partially collapsed Gibbs sampler is designed to generate samples asymptotically distributed according to the joint posterior distribution of the unknown model parameters and hyperparameters. These samples are then used to approximate the joint maximum a posteriori estimator of the sources and mixing matrix. Simulations conducted on synthetic data are reported to illustrate the performance of the method for recovering sparse representations. An application to sparse coding on an under-complete dictionary is finally investigated.
Title: On the optimal design of parallel robots taking into account their deformations and natural frequencies
Abstract: This paper discusses the utility of using simple stiffness and vibration models, based on the Jacobian matrix of a manipulator and only the rigidity of the actuators, when its geometry is optimised. In many works, these simplified models are used to propose optimal designs of robots. However, the elasticity of the drive system is often negligible in comparison with the elasticity of the elements, especially in applications where high dynamic performances are needed. Therefore, the use of such a simplified model may lead to the creation of robots with long legs, which will be subjected to large bending and twisting deformations. This paper presents an example of a manipulator for which it is preferable to use a complete stiffness or vibration model to obtain the most suitable design, and shows that the use of simplified models can lead to mechanisms with poorer rigidity.
Title: On the Internal Topological Structure of Plane Regions
Abstract: The study of topological information of spatial objects has long been a focus of research in disciplines like computational geometry, spatial reasoning, cognitive science, and robotics. While most of this research has emphasised the topological relations between spatial objects, this work studies the internal topological structure of bounded plane regions, which may consist of multiple pieces and/or have holes and islands to any finite level. The insufficiency of simple regions (regions homeomorphic to closed disks) to cope with the variety and complexity of spatial entities and phenomena has been widely acknowledged. Another significant drawback of simple regions is that they are not closed under the set operations of union, intersection, and difference. This paper considers bounded semi-algebraic regions, which are closed under set operations and can closely approximate most plane regions arising in practice.
Title: Reasoning with Topological and Directional Spatial Information
Abstract: Current research on qualitative spatial representation and reasoning mainly focuses on one single aspect of space. In real world applications, however, multiple spatial aspects are often involved simultaneously. This paper investigates problems arising in reasoning with combined topological and directional information. We use the RCC8 algebra and the Rectangle Algebra (RA) for expressing topological and directional information respectively. We give examples to show that the bipath-consistency algorithm BIPATH is incomplete for solving even basic RCC8 and RA constraints. If topological constraints are taken from some maximal tractable subclasses of RCC8, and directional constraints are taken from a subalgebra, termed DIR49, of RA, then we show that BIPATH is able to separate topological constraints from directional ones. This means that, given a set of hybrid topological and directional constraints from the above subclasses of RCC8 and RA, we can reduce the joint satisfaction problem in polynomial time to two independent satisfaction problems in RCC8 and RA. For general RA constraints, we give a method to compute solutions that satisfy all topological constraints and approximately satisfy each RA constraint to any prescribed precision.
Title: Reasoning about Cardinal Directions between Extended Objects
Abstract: Direction relations between extended spatial objects are important commonsense knowledge. Recently, Goyal and Egenhofer proposed a formal model, known as Cardinal Direction Calculus (CDC), for representing direction relations between connected plane regions. CDC is perhaps the most expressive qualitative calculus for directional information, and has attracted increasing interest from areas such as artificial intelligence, geographical information science, and image retrieval. Given a network of CDC constraints, the consistency problem is deciding if the network is realizable by connected regions in the real plane. This paper provides a cubic algorithm for checking the consistency of basic CDC constraint networks, and proves that reasoning with CDC is in general an NP-complete problem. For a consistent network of basic CDC constraints, our algorithm also returns a 'canonical' solution in cubic time. This cubic algorithm is also adapted to cope with cardinal directions between possibly disconnected regions, in which case currently the best algorithm is of time complexity O(n^5).
Title: A theory of intelligence: networked problem solving in animal societies
Abstract: A society's single emergent, increasing intelligence arises partly from the thermodynamic advantages of networking the innate intelligence of different individuals, and partly from the accumulation of solved problems. Economic growth is proportional to the square of the network entropy of a society's population times the network entropy of the number of the society's solved problems.
Title: Latin hypercube sampling with inequality constraints
Abstract: In some studies requiring predictive and CPU-time-consuming numerical models, the sampling design of the model input variables has to be chosen with caution. For this purpose, Latin hypercube sampling has a long history and has demonstrated its robustness. In this paper we propose and discuss a new algorithm to build a Latin hypercube sample (LHS) taking into account inequality constraints between the sampled variables. This technique, called constrained Latin hypercube sampling (cLHS), consists in performing permutations on an initial LHS to honor the desired monotonic constraints. The relevance of this approach is shown on a real example concerning numerical welding simulation, where the inequality constraints are caused by the physical decrease of some material properties as a function of temperature.
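A much-simplified sketch of the idea (ours; the paper's cLHS performs permutations on a general initial LHS and handles arbitrary monotonic constraints): draw one point per stratum for each of two variables on [0,1], pair the draws stratum by stratum, and swap the two values within a stratum whenever the constraint x1 <= x2 is violated. Because both values lie in the same stratum, the swap preserves the Latin property of each marginal.

```python
import random

random.seed(0)

def constrained_lhs(n):
    """Latin hypercube sample of (x1, x2) on [0,1]^2 honoring x1 <= x2.

    One draw per stratum per variable; the per-stratum draws are already
    in ascending order by construction, so pairing them stratum by
    stratum and swapping within a stratum fixes any violation without
    losing the one-point-per-stratum property of either marginal.
    """
    x1 = [(i + random.random()) / n for i in range(n)]
    x2 = [(i + random.random()) / n for i in range(n)]
    sample = []
    for a, b in zip(x1, x2):
        if a > b:          # both values are in the same stratum: swap them
            a, b = b, a
        sample.append((a, b))
    return sample

for a, b in constrained_lhs(10):
    print(round(a, 3), round(b, 3))
```

The real algorithm must work harder because a general initial LHS pairs strata at random, so repairs cannot always stay within a single stratum; this sketch only shows why permutation-style repairs can coexist with the Latin property.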
Title: Monte Carlo Methods in Statistics
Abstract: Monte Carlo methods are now an essential part of the statistician's toolbox, to the point of being more familiar to graduate students than the measure theoretic notions upon which they are based! We recall in this note some of the advances made in the design of Monte Carlo techniques towards their use in Statistics, referring to Robert and Casella (2004,2010) for an in-depth coverage.
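As a one-screen reminder of the flavor of method surveyed (standard textbook material, not specific to the note): estimating pi by Monte Carlo integration, where the error shrinks like 1/sqrt(n).

```python
import random

random.seed(42)

def mc_pi(n):
    # Fraction of uniform points in the unit square falling inside the
    # quarter-disc, scaled by 4, estimates pi.
    hits = sum(1 for _ in range(n)
               if random.random() ** 2 + random.random() ** 2 <= 1.0)
    return 4.0 * hits / n

for n in (100, 10_000, 1_000_000):
    print(n, mc_pi(n))
```

The advances the note surveys (importance sampling, MCMC, adaptive schemes) are all ways of doing better than this naive rate-limited estimator when the target distribution is awkward.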
Title: Rare-Allele Detection Using Compressed Se(que)nsing
Abstract: Detection of rare variants by resequencing is important for the identification of individuals carrying disease variants. Rapid sequencing by new technologies enables low-cost resequencing of target regions, although it is still prohibitive to test more than a few individuals. In order to improve cost trade-offs, it has recently been suggested to apply pooling designs which enable the detection of carriers of rare alleles in groups of individuals. However, this was shown to hold only for a relatively low number of individuals in a pool, and requires the design of pooling schemes for particular cases. We propose a novel pooling design, based on a compressed sensing approach, which is general, simple, and efficient. We model the experimental procedure and show via computer simulations that it enables the recovery of rare allele carriers out of larger groups than were possible before, especially in situations where high coverage is obtained for each individual. Our approach can also be combined with barcoding techniques to enhance performance and provide a feasible solution based on current resequencing costs. For example, when targeting a small enough genomic region ( 100 base-pairs) and using only 10 sequencing lanes and 10 distinct barcodes, one can recover the identity of 4 rare allele carriers out of a population of over 4000 individuals.
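As a toy illustration of the pooling idea only (this is classic non-adaptive group testing for a single carrier, far simpler than the paper's compressed-sensing design, which tolerates several carriers and sequencing noise): with a bit-testing design, log2(n) pools suffice to identify one carrier among n individuals.

```python
def make_pools(n_individuals, n_pools):
    """Bit-testing design: pool j contains individual i iff bit j of i is 1."""
    return [[i for i in range(n_individuals) if (i >> j) & 1]
            for j in range(n_pools)]

def decode_single_carrier(pool_results):
    """Recover the index of a single carrier from the binary pool outcomes:
    the positive pools spell out the carrier's index in binary."""
    return sum(1 << j for j, positive in enumerate(pool_results) if positive)

n = 16
pools = make_pools(n, 4)        # 4 pooled tests cover 16 individuals
carrier = 11
results = [carrier in pool for pool in pools]
print(decode_single_carrier(results))  # recovers 11 from only 4 pooled tests
```

The compressed-sensing formulation in the paper generalizes this: the pooling matrix becomes the measurement matrix, carrier status becomes a sparse vector, and sparse-recovery algorithms replace the binary decoding above.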
Title: Kinematic analysis of a class of analytic planar 3-RPR parallel manipulators
Abstract: A class of analytic planar 3-RPR manipulators is analyzed in this paper. These manipulators have congruent base and moving platforms, and the moving platform is rotated by 180 deg about an axis in the plane. The forward kinematics is reduced to the solution of a 3rd-degree polynomial and a quadratic equation in sequence. The singularities are calculated and plotted in the joint space. The second-order singularities (cusp points), which play an important role in non-singular changes of assembly mode, are also analyzed.
Title: Scale-Based Gaussian Coverings: Combining Intra and Inter Mixture Models in Image Segmentation
Abstract: By a "covering" we mean a Gaussian mixture model fit to observed data. Approximations of the Bayes factor can be used to judge model fit to the data within a given Gaussian mixture model. Between families of Gaussian mixture models, we propose the R\'enyi quadratic entropy as an excellent and tractable model comparison framework. We exemplify this using the segmentation of an MRI image volume, based (1) on a direct Gaussian mixture model applied to the marginal distribution function, and (2) on a Gaussian model fit through k-means applied to the 4D multivalued image volume furnished by the wavelet transform. Visual preference for one model over another is not immediate. The R\'enyi quadratic entropy allows us to show clearly that one of these modelings is superior to the other.
Title: Advances in Feature Selection with Mutual Information
Abstract: The selection of features that are relevant for a prediction or classification problem is an important problem in many domains involving high-dimensional data. Selecting features helps fight the curse of dimensionality, improves the performance of prediction or classification methods, and aids the interpretation of the application. In a nonlinear context, the mutual information is widely used as a relevance criterion for features and sets of features. Nevertheless, it suffers from at least three major limitations: mutual information estimators depend on smoothing parameters, there is no theoretically justified stopping criterion in the greedy feature selection procedure, and the estimation itself suffers from the curse of dimensionality. This chapter shows how to deal with these problems. The first two are addressed by using resampling techniques that provide a statistical basis to select the estimator parameters and to stop the search procedure. The third one is addressed by modifying the mutual information criterion into a measure of how features are complementary (and not only informative) for the problem at hand.
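A bare-bones sketch of the greedy forward-selection loop the chapter builds on (our illustration, using a plug-in MI estimate for discrete data; the chapter's actual contributions, resampling-based parameter selection and stopping, are not shown):

```python
import math
from collections import Counter

def mutual_info(xs, ys):
    """Plug-in mutual information estimate (in nats) between two discrete
    sequences, from empirical joint and marginal frequencies."""
    n = len(xs)
    pxy = Counter(zip(xs, ys))
    px, py = Counter(xs), Counter(ys)
    return sum(c / n * math.log(c * n / (px[x] * py[y]))
               for (x, y), c in pxy.items())

def greedy_select(features, labels, k):
    """Forward selection: repeatedly add the feature whose combination with
    the already-selected ones has the largest MI with the labels."""
    selected, remaining = [], list(range(len(features)))
    for _ in range(k):
        def joint_mi(j):
            combined = list(zip(*[features[i] for i in selected + [j]]))
            return mutual_info(combined, labels)
        best = max(remaining, key=joint_mi)
        selected.append(best)
        remaining.remove(best)
    return selected

labels      = [0, 1, 0, 1, 0, 1, 0, 1]
informative = [0, 1, 0, 1, 0, 1, 0, 1]   # copies the label
noisy       = [0, 0, 1, 1, 0, 0, 1, 1]   # independent of the label
constant    = [0, 0, 0, 0, 0, 0, 0, 0]   # carries no information
print(greedy_select([informative, noisy, constant], labels, 2))
```

The chapter's limitations are visible even here: the plug-in estimator needs no smoothing only because the data are discrete, the loop has no principled reason to stop at k, and the joint variable grows with each round, which is exactly where the curse of dimensionality bites.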
Title: Median topographic maps for biomedical data sets
Abstract: Median clustering extends popular neural data analysis methods such as the self-organizing map or neural gas to general data structures given by a dissimilarity matrix only. This offers flexible and robust global data inspection methods which are particularly suited for a variety of data as occurs in biomedical domains. In this chapter, we give an overview about median clustering and its properties and extensions, with a particular focus on efficient implementations adapted to large scale data analysis.
Title: On Planning with Preferences in HTN
Abstract: In this paper, we address the problem of generating preferred plans by combining the procedural control knowledge specified by Hierarchical Task Networks (HTNs) with rich qualitative user preferences. The outcome of our work is a language for specifying user preferences, tailored to HTN planning, together with a provably optimal preference-based planner, HTNPLAN, that is implemented as an extension of SHOP2. To compute preferred plans, we propose an approach based on forward-chaining heuristic search. Our heuristic uses an admissible evaluation function measuring the satisfaction of preferences over partial plans. Our empirical evaluation demonstrates the effectiveness of our HTNPLAN heuristics. We prove our approach sound and optimal with respect to the plans it generates by appealing to a situation calculus semantics of our preference language and of HTN planning. While our implementation builds on SHOP2, the language and techniques proposed here are relevant to a broad range of HTN planners.
Title: Efficient algorithms for training the parameters of hidden Markov models using stochastic expectation maximization (EM) training and Viterbi training
Abstract: Background: Hidden Markov models are widely employed by numerous bioinformatics programs used today. Applications range widely from comparative gene prediction to time-series analyses of micro-array data. The parameters of the underlying models need to be adjusted for specific data sets, for example the genome of a particular species, in order to maximize the prediction accuracy. Computationally efficient algorithms for parameter training are thus key to maximizing the usability of a wide range of bioinformatics applications. Results: We introduce two computationally efficient training algorithms, one for Viterbi training and one for stochastic expectation maximization (EM) training, which render the memory requirements independent of the sequence length. Unlike the existing algorithms for Viterbi and stochastic EM training which require a two-step procedure, our two new algorithms require only one step and scan the input sequence in only one direction. We also implement these two new algorithms and the already published linear-memory algorithm for EM training into the hidden Markov model compiler HMM-Converter and examine their respective practical merits for three small example models. Conclusions: Bioinformatics applications employing hidden Markov models can use the two algorithms in order to make Viterbi training and stochastic EM training more computationally efficient. Using these algorithms, parameter training can thus be attempted for more complex models and longer training sequences. The two new algorithms have the added advantage of being easier to implement than the corresponding default algorithms for Viterbi training and stochastic EM training.
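For orientation (a textbook sketch, not the paper's memory-efficient algorithms): Viterbi training alternates Viterbi decoding of the most likely state path with re-estimation of the model parameters from counts along that path. The decoding step, in log space and with illustrative toy parameters of our choosing:

```python
import math

def viterbi(obs, states, start_p, trans_p, emit_p):
    """Most likely state path for an observation sequence (log-space Viterbi).
    Each column maps a state to (best log-score, backpointer to prev state)."""
    V = [{s: (math.log(start_p[s]) + math.log(emit_p[s][obs[0]]), None)
          for s in states}]
    for o in obs[1:]:
        prev = V[-1]
        V.append({s: max((prev[r][0] + math.log(trans_p[r][s])
                          + math.log(emit_p[s][o]), r) for r in states)
                  for s in states})
    # Trace the argmax path back through the stored backpointers.
    path = [max(states, key=lambda s: V[-1][s][0])]
    for col in reversed(V[1:]):
        path.append(col[path[-1]][1])
    return path[::-1]

states = ("A", "B")
start  = {"A": 0.5, "B": 0.5}
trans  = {"A": {"A": 0.8, "B": 0.2}, "B": {"A": 0.2, "B": 0.8}}
emit   = {"A": {"x": 0.9, "y": 0.1}, "B": {"x": 0.1, "y": 0.9}}
print(viterbi("xxyy", states, start, trans, emit))  # ['A', 'A', 'B', 'B']
```

Storing every column, as above, is what makes naive implementations memory-hungry for long sequences; the paper's algorithms remove exactly this sequence-length dependence of the memory footprint.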