Abstract: It is now well established that sparse signal models are well suited to restoration tasks and can effectively be learned from audio, image, and video data. Recent research has been aimed at learning discriminative sparse models instead of purely reconstructive ones. This paper proposes a new step in that direction, with a novel sparse representation for signals belonging to different classes in terms of a shared dictionary and multiple class-decision functions. The linear variant of the proposed model admits a simple probabilistic interpretation, while its most general variant admits an interpretation in terms of kernels. An optimization framework for learning all the components of the proposed model is presented, along with experimental results on standard handwritten digit and texture classification tasks.
Title: A New Framework of Multistage Hypothesis Tests
Abstract: In this paper, we have established a general framework of multistage hypothesis tests which applies to arbitrarily many mutually exclusive and exhaustive composite hypotheses. Within the new framework, we have constructed specific multistage tests which rigorously control the risk of committing decision errors and are more efficient than previous tests in terms of average sample number and the number of sampling operations. Without truncation, the sample numbers of our testing plans are absolutely bounded.
Title: Kinematic and Dynamic Analyses of the Orthoglide 5-axis
Abstract: This paper deals with the kinematic and dynamic analyses of the Orthoglide 5-axis, a five-degree-of-freedom manipulator. It is derived from two manipulators: i) the Orthoglide 3-axis, a three-DOF translational manipulator, and ii) the Agile Eye, a parallel spherical wrist. First, the kinematic and dynamic models of the Orthoglide 5-axis are developed. The geometric and inertial parameters of the manipulator are determined using CAD software. Then, the required motor performances are evaluated for some test trajectories. Finally, the motors are selected from a catalogue based on these results.
Title: Singularity Analysis of Limited-dof Parallel Manipulators using Grassmann-Cayley Algebra
Abstract: This paper geometrically characterizes the singularities of limited-DOF parallel manipulators. The geometric conditions associated with the dependency of the six Plücker vectors of lines (finite and infinite) constituting the rows of the inverse Jacobian matrix are formulated using Grassmann-Cayley algebra. The manipulators under consideration need not have a passive spherical joint somewhere in each leg. This study is illustrated with three example robots.
Title: Framework for Dynamic Evaluation of Muscle Fatigue in Manual Handling Work
Abstract: Muscle fatigue is defined as the point at which the muscle is no longer able to sustain the required force or work output level. Overexertion of muscle force and muscle fatigue can induce acute and chronic pain in the human body. When muscle fatigue accumulates, functional disability can result in the form of musculoskeletal disorders (MSDs). Several posture exposure analysis methods are useful for rating MSD risks, but they are mainly based on static postures. Even in some fatigue evaluation methods, muscle fatigue evaluation is only available for static postures and is not suitable for dynamic working processes. Meanwhile, some existing muscle fatigue models based on physiological models cannot easily be used in industrial ergonomic evaluations. The external dynamic load is clearly the most important factor producing muscle fatigue, so we propose a new fatigue model within a framework for evaluating fatigue in dynamic working processes. Under this framework, a virtual reality system generates a virtual working environment with which the worker can interact via haptic interfaces and an optical motion capture system. The motion and load information are collected and further processed to evaluate the overall workload of the worker, based on dynamic muscle fatigue models and other work evaluation criteria, and to provide new information characterizing the strenuousness of the task during the design process.
Title: SINGULAB - A Graphical user Interface for the Singularity Analysis of Parallel Robots based on Grassmann-Cayley Algebra
Abstract: This paper presents SinguLab, a graphical user interface for the singularity analysis of parallel robots. The algorithm is based on Grassmann-Cayley algebra. The proposed tool is interactive and introduces the designer to the singularity analysis performed by this method, showing all the stages along the procedure and eventually showing the solution algebraically and graphically, allowing as well the singularity verification of different robot poses.
Title: A Control Variate Approach for Improving Efficiency of Ensemble Monte Carlo
Abstract: In this paper we present a new approach to control variates for improving computational efficiency of Ensemble Monte Carlo. We present the approach using simulation of paths of a time-dependent nonlinear stochastic equation. The core idea is to extract information at one or more nominal model parameters and use this information to gain estimation efficiency at neighboring parameters. This idea is the basis of a general strategy, called DataBase Monte Carlo (DBMC), for improving efficiency of Monte Carlo. In this paper we describe how this strategy can be implemented using the variance reduction technique of Control Variates (CV). We show that, once an initial setup cost for extracting information is incurred, this approach can lead to significant gains in computational efficiency. The initial setup cost is justified in projects that require a large number of estimations or in those that are to be performed under real-time constraints.
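As a hedged illustration of the control-variate technique this abstract builds on (not the DBMC strategy itself, which extracts information at nominal model parameters), the following minimal Python sketch estimates E[e^X] for X ~ N(0, 1), using X itself as a control variate with known mean 0; all names and the toy integrand are our own choices.

```python
import math
import random

random.seed(0)
n = 100_000
xs = [random.gauss(0.0, 1.0) for _ in range(n)]
f = [math.exp(x) for x in xs]  # target: E[e^X] = e^(1/2), about 1.6487
g = xs                         # control variate with known mean E[X] = 0

fbar = sum(f) / n
gbar = sum(g) / n
# optimal coefficient b* = Cov(f, g) / Var(g)
cov = sum((fi - fbar) * (gi - gbar) for fi, gi in zip(f, g)) / (n - 1)
var = sum((gi - gbar) ** 2 for gi in g) / (n - 1)
b = cov / var
cv_est = fbar - b * (gbar - 0.0)  # shift by the known mean of the control
print(cv_est)
```

The variance of `cv_est` is reduced by the factor 1 - rho^2, where rho is the correlation between the integrand and the control; estimating b from the same sample, as done here, is the standard practical shortcut.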
Title: Extended ASP tableaux and rule redundancy in normal logic programs
Abstract: We introduce an extended tableau calculus for answer set programming (ASP). The proof system is based on the ASP tableaux defined in [Gebser&Schaub, ICLP 2006], with an added extension rule. We investigate the power of Extended ASP Tableaux both theoretically and empirically. We study the relationship of Extended ASP Tableaux with the Extended Resolution proof system defined by Tseitin for sets of clauses, and separate Extended ASP Tableaux from ASP Tableaux by giving a polynomial-length proof for a family of normal logic programs P_n for which ASP Tableaux has exponential-length minimal proofs with respect to n. Additionally, Extended ASP Tableaux yields interesting insights into the effect of program simplification on the lengths of proofs in ASP. Closely related to Extended ASP Tableaux, we empirically investigate the effect of redundant rules on the efficiency of ASP solving. To appear in Theory and Practice of Logic Programming (TPLP).
Title: Using descriptive mark-up to formalize translation quality assessment
Abstract: The paper deals with using descriptive mark-up to highlight translation mistakes. The author argues for the need to develop a standard, formal, XML-based way of describing translation mistakes, which is important for achieving objective translation quality assessment. Marked-up translations can be used in corpus translation studies; moreover, automatic translation assessment based on marked-up mistakes becomes possible. The paper concludes by setting out guidelines for further work in this field.
Title: Generalized Prediction Intervals for Arbitrary Distributed High-Dimensional Data
Abstract: This paper generalizes the traditional statistical concept of prediction intervals to arbitrary probability density functions in high-dimensional feature spaces by introducing significance level distributions, which provide interval-independent probabilities for continuous random variables. The advantage of transforming a probability density function into a significance level distribution is that it enables one-class classification, or outlier detection, in a direct manner.
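A minimal sketch of the idea behind this abstract, under our own simplifying assumption of a one-dimensional standard normal density (the paper targets arbitrary high-dimensional densities): the significance level of a point x is the probability that the density of a random draw from the model is no larger than the density at x, so small values flag x as an outlier. For a standard normal this reduces to the two-sided tail probability P(|X| >= |x|). All names below are illustrative.

```python
import bisect
import math
import random

def gauss_pdf(x):
    return math.exp(-x * x / 2.0) / math.sqrt(2.0 * math.pi)

random.seed(0)
# Monte Carlo sample of density values f(X) under the assumed model
dens = sorted(gauss_pdf(random.gauss(0.0, 1.0)) for _ in range(100_000))

def significance_level(x):
    """Estimate P(f(X) <= f(x)); small values flag x as an outlier."""
    return bisect.bisect_right(dens, gauss_pdf(x)) / len(dens)

print(significance_level(0.0))  # at the mode of the density: close to 1
print(significance_level(4.0))  # far in the tail: close to 0
```

Thresholding `significance_level(x)` at a chosen level (say 0.01) gives the direct one-class classifier the abstract refers to.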
Title: Finding rare objects and building pure samples: Probabilistic quasar classification from low resolution Gaia spectra
Abstract: We develop and demonstrate a probabilistic method for classifying rare objects in surveys with the particular goal of building very pure samples. It works by modifying the output probabilities from a classifier so as to accommodate our expectation (priors) concerning the relative frequencies of different classes of objects. We demonstrate our method using the Discrete Source Classifier, a supervised classifier currently based on Support Vector Machines, which we are developing in preparation for the Gaia data analysis. DSC classifies objects using their very low resolution optical spectra. We look in detail at the problem of quasar classification, because identification of a pure quasar sample is necessary to define the Gaia astrometric reference frame. By varying a posterior probability threshold in DSC we can trade off sample completeness and contamination. We show, using our simulated data, that it is possible to achieve a pure sample of quasars (upper limit on contamination of 1 in 40,000) with a completeness of 65% at magnitudes of G=18.5, and 50% at G=20.0, even when quasars have a frequency of only 1 in every 2000 objects. The star sample completeness is simultaneously 99% with a contamination of 0.7%. Including parallax and proper motion in the classifier barely changes the results. We further show that not accounting for class priors in the target population leads to serious misclassifications and poor predictions for sample completeness and contamination. (Truncated)
Title: Changing Assembly Modes without Passing Parallel Singularities in Non-Cuspidal 3-RPR Planar Parallel Robots
Abstract: This paper demonstrates that any general 3-DOF three-legged planar parallel robot with extensible legs can change assembly modes without passing through parallel singularities (configurations where the mobile platform loses its stiffness). While the results are purely theoretical, this paper questions the very definition of parallel singularities.
Title: Robust Near-Isometric Matching via Structured Learning of Graphical Models
Abstract: Models for near-rigid shape matching are typically based on distance-related features, in order to infer matches that are consistent with the isometric assumption. However, real shapes from image datasets, even when expected to be related by "almost isometric" transformations, are actually subject not only to noise but also, to some limited degree, to variations in appearance and scale. In this paper, we introduce a graphical model that parameterises appearance, distance, and angle features and we learn all of the involved parameters via structured prediction. The outcome is a model for near-rigid shape matching which is robust in the sense that it is able to capture the possibly limited but still important scale and appearance variations. Our experimental results reveal substantial improvements upon recent successful models, while maintaining similar running times.
Title: Hierarchical Bayesian sparse image reconstruction with application to MRFM
Abstract: This paper presents a hierarchical Bayesian model to reconstruct sparse images when the observations are obtained from linear transformations and corrupted by an additive white Gaussian noise. Our hierarchical Bayes model is well suited to such naturally sparse image applications as it seamlessly accounts for properties such as sparsity and positivity of the image via appropriate Bayes priors. We propose a prior that is based on a weighted mixture of a positive exponential distribution and a mass at zero. The prior has hyperparameters that are tuned automatically by marginalization over the hierarchical Bayesian model. To overcome the complexity of the posterior distribution, a Gibbs sampling strategy is proposed. The Gibbs samples can be used to estimate the image to be recovered, e.g. by maximizing the estimated posterior distribution. In our fully Bayesian approach the posteriors of all the parameters are available. Thus our algorithm provides more information than other previously proposed sparse reconstruction methods that only give a point estimate. The performance of our hierarchical Bayesian sparse reconstruction method is illustrated on synthetic and real data collected from a tobacco virus sample using a prototype MRFM instrument.
Title: Modeling and Control with Local Linearizing Nadaraya Watson Regression
Abstract: Black box models of technical systems are purely descriptive: they do not explain why a system works the way it does, and are therefore insufficient for some problems. But there are numerous applications, for example in control engineering, for which a black box model is entirely sufficient. In this article, we describe a general stochastic framework with which such models can be built easily and fully automatically from observations. Furthermore, we give a practical example and show how this framework can be used to model and control a motorcar powertrain.
Title: Survival Tree and MELD to Predict Long-Term Survival in the Liver Transplantation Waiting List
Abstract: Background: Many authors have described MELD as a predictor of short-term mortality on the liver transplantation waiting list. However, the accuracy of the MELD score in predicting long-term mortality has not been statistically evaluated. Objective: The aim of this study is to analyze the MELD score, as well as other variables, as a predictor of long-term mortality using a new model: Survival Tree analysis. Study Design and Setting: The variables obtained at the time of liver transplantation list enrollment and considered in this study are: sex, age, blood type, body mass index, etiology of liver disease, hepatocellular carcinoma, waiting time for transplant, and MELD. Mortality on the waiting list is the outcome. Patients who were excluded, transplanted, or still on the list at the end of the study are treated as censored. Results: The graphical representation of the survival trees showed that the most statistically significant cut-off is a MELD score of 16. Conclusion: The results are compatible with the MELD cut-off point indicated in the clinical literature.
Title: Clustering of discretely observed diffusion processes
Abstract: In this paper a new dissimilarity measure for identifying groups of assets with similar dynamics is proposed. The underlying generating process is assumed to be a diffusion process, the solution of a stochastic differential equation, observed at discrete times. The mesh of observations is not required to shrink to zero. As the distance between two observed paths, the quadratic distance between the corresponding estimated Markov operators is considered. Analyses of both synthetic data and real financial data from NYSE/NASDAQ stocks give evidence that this distance is able to capture differences in both the drift and diffusion coefficients, unlike other commonly used metrics.
Title: Improved Sequential Stopping Rule for Monte Carlo Simulation
Abstract: This paper presents an improved result on the negative-binomial Monte Carlo technique analyzed in a previous paper for the estimation of an unknown probability p. Specifically, the confidence level associated with a relative interval [p/\mu_2, p\mu_1], with \mu_1, \mu_2 > 1, is proved to exceed its asymptotic value for a broader range of intervals than that given in the referred paper, and for any value of p. This extends the applicability of the estimator by relaxing the conditions that guarantee a given confidence level.
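For concreteness, the negative-binomial (inverse binomial) sampling scheme underlying such stopping rules can be sketched as follows: run Bernoulli trials until a fixed number k of occurrences of the rare event, and estimate p from the random trial count. The sketch below uses the classical unbiased estimator (k-1)/(n-1); the paper's relative-interval stopping rule and confidence-level analysis are more elaborate, and all parameter values here are illustrative.

```python
import random
import statistics

random.seed(2)
p, k = 0.01, 50  # true (unknown) probability, target number of hits

def inverse_binomial_estimate(p, k):
    """Run Bernoulli(p) trials until k successes; estimate p from the
    random trial count n via the classical unbiased (k-1)/(n-1)."""
    hits = trials = 0
    while hits < k:
        trials += 1
        if random.random() < p:
            hits += 1
    return (k - 1) / (trials - 1)

est = statistics.fmean(inverse_binomial_estimate(p, k) for _ in range(200))
print(est)
```

Because the stopping condition is driven by the number of successes rather than a fixed sample size, the relative accuracy of the estimate is roughly 1/sqrt(k) regardless of how small p is, which is what makes this scheme attractive for rare-event probabilities.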
Title: Learning Hidden Markov Models using Non-Negative Matrix Factorization
Abstract: The Baum-Welch algorithm, together with its derivatives and variations, has been the main technique for learning Hidden Markov Models (HMMs) from observational data. We present an HMM learning algorithm based on the non-negative matrix factorization (NMF) of higher-order Markovian statistics that is structurally different from Baum-Welch and its associated approaches. The described algorithm supports estimation of the number of recurrent states of an HMM and iterates the NMF algorithm to improve the learned HMM parameters. Numerical examples are provided as well.
Title: Non-linear regression models for Approximate Bayesian Computation
Abstract: Approximate Bayesian inference on the basis of summary statistics is well-suited to complex problems for which the likelihood is either mathematically or computationally intractable. However, methods that use rejection suffer from the curse of dimensionality as the number of summary statistics increases. Here we propose a machine-learning approach to the estimation of the posterior density by introducing two innovations. The new method fits a nonlinear conditional heteroscedastic regression of the parameter on the summary statistics, and then adaptively improves estimation using importance sampling. The new algorithm is compared to state-of-the-art approximate Bayesian methods, and achieves a considerable reduction of the computational burden in two examples of inference, in statistical genetics and in a queueing model.
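A minimal sketch of the regression-adjustment idea this abstract extends (the simpler local-linear adjustment in the style of Beaumont et al., not the paper's nonlinear heteroscedastic model): simulate parameter/summary pairs, keep those whose summaries are closest to the observed one, then shift the kept parameters along a fitted regression line toward the observed summary. The toy model, a normal mean with a uniform prior and the sample mean as summary, is our illustrative assumption.

```python
import random
import statistics

random.seed(1)
n_data = 50
s_obs = 2.0  # observed summary statistic (sample mean), assumed given

# 1) simulate (parameter, summary) pairs: prior U(-5, 5), model N(theta, 1)
draws = []
for _ in range(20_000):
    th = random.uniform(-5.0, 5.0)
    s = statistics.fmean(random.gauss(th, 1.0) for _ in range(n_data))
    draws.append((th, s))

# 2) rejection step: keep the 5% of draws whose summary is closest to s_obs
draws.sort(key=lambda ts: abs(ts[1] - s_obs))
kept = draws[:1000]

# 3) local-linear regression adjustment: theta_adj = theta - b * (s - s_obs),
#    with slope b fitted by least squares on the accepted pairs
th_k = [t for t, _ in kept]
s_k = [s for _, s in kept]
tbar, sbar = statistics.fmean(th_k), statistics.fmean(s_k)
b = (sum((s - sbar) * (t - tbar) for s, t in zip(s_k, th_k))
     / sum((s - sbar) ** 2 for s in s_k))
adjusted = [t - b * (s - s_obs) for t, s in zip(th_k, s_k)]
print(statistics.fmean(adjusted))  # posterior mean estimate, near 2.0
```

The adjustment step is what the paper generalizes: replacing the linear fit with a nonlinear heteroscedastic regression lets the tolerance region be much larger, which is how the curse of dimensionality in the rejection step is mitigated.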
Title: Sampling from Dirichlet populations: estimating the number of species
Abstract: Consider the random Dirichlet partition of the interval into $n$ fragments with parameter $\theta > 0$. We recall the unordered Ewens sampling formulae from finite Dirichlet partitions. As this is a key variable for estimation purposes, focus is on the number of distinct species visited in the sampling process. These results are illustrated in specific cases. We use these preliminary statistical results on frequency distributions to address the following sampling problem: what is the estimated number of species when sampling from Dirichlet populations? The results obtained are in accordance with those found in sampling theory from random proportions with the Poisson-Dirichlet distribution. To conclude, we apply the suggested estimators to two different sets of real data.
Title: Tests for zero-inflation and overdispersion
Abstract: We propose a new methodology to detect zero-inflation and overdispersion based on the comparison of the expected sample extremes among convexly ordered distributions. The method is very flexible and includes tests for the proportion of structural zeros in zero-inflated models, tests to distinguish between two ordered parametric families, and a new general test to detect overdispersion. The performance of the proposed tests is evaluated via simulation studies. For the well-known fetal lamb data, we conclude that the zero-inflated Poisson model should be rejected in favor of more dispersed models, but we cannot reject the negative binomial model.
Title: A multivariate phase distribution and its estimation
Abstract: Circular variables such as phase or orientation have received considerable attention throughout the scientific and engineering communities and have recently been quite prominent in the field of neuroscience. While many analytic techniques have used phase as an effective representation, there has been little work on techniques that capture the joint statistics of multiple phase variables. In this paper we introduce a distribution that captures empirically observed pair-wise phase relationships. Importantly, we have developed a computationally efficient and accurate technique for estimating the parameters of this distribution from data. We show that the algorithm performs well in high-dimensions (d=100), and in cases with limited data (as few as 100 samples per dimension). We also demonstrate how this technique can be applied to electrocorticography (ECoG) recordings to investigate the coupling of brain areas during different behavioral states. This distribution and estimation technique can be broadly applied to any setting that produces multiple circular variables.
Title: Audio Classification from Time-Frequency Texture
Abstract: Time-frequency representations of audio signals often resemble texture images. This paper derives a simple audio classification algorithm based on treating sound spectrograms as texture images. The algorithm is inspired by an earlier visual classification scheme particularly efficient at classifying textures. While solely based on time-frequency texture features, the algorithm achieves surprisingly good performance in musical instrument classification experiments.
Title: Mining Meaning from Wikipedia
Abstract: Wikipedia is a goldmine of information; not just for its many readers, but also for the growing community of researchers who recognize it as a resource of exceptional scale and utility. It represents a vast investment of manual effort and judgment: a huge, constantly evolving tapestry of concepts and relations that is being applied to a host of tasks. This article provides a comprehensive description of this work. It focuses on research that extracts and makes use of the concepts, relations, facts and descriptions found in Wikipedia, and organizes the work into four broad categories: applying Wikipedia to natural language processing; using it to facilitate information retrieval; using it for information extraction; and using it as a resource for ontology building. The article addresses how Wikipedia is being used as is, how it is being improved and adapted, and how it is being combined with other structures to create entirely new resources. We identify the research groups and individuals involved, and how their work has developed in the last few years. We provide a comprehensive list of the open-source software they have produced.
Title: Achieving compositionality of the stable model semantics for Smodels programs
Abstract: In this paper, a Gaifman-Shapiro-style module architecture is tailored to the case of Smodels programs under the stable model semantics. The composition of Smodels program modules is suitably limited by module conditions which ensure the compatibility of the module system with stable models. Hence the semantics of an entire Smodels program depends directly on stable models assigned to its modules. This result is formalized as a module theorem which truly strengthens Lifschitz and Turner's splitting-set theorem for the class of Smodels programs. To streamline generalizations in the future, the module theorem is first proved for normal programs and then extended to cover Smodels programs using a translation from the latter class of programs to the former class. Moreover, the respective notion of module-level equivalence, namely modular equivalence, is shown to be a proper congruence relation: it is preserved under substitutions of modules that are modularly equivalent. Principles for program decomposition are also addressed. The strongly connected components of the respective dependency graph can be exploited in order to extract a module structure when there is no explicit a priori knowledge about the modules of a program. The paper includes a practical demonstration of tools that have been developed for automated (de)composition of Smodels programs. To appear in Theory and Practice of Logic Programming.
Title: Solving the 100 Swiss Francs Problem
Abstract: Sturmfels offered 100 Swiss Francs in 2005 for the resolution of a conjecture concerning a special case of maximum likelihood estimation for a latent class model. This paper proves the conjecture.
Title: Surrogate Learning - An Approach for Semi-Supervised Classification
Abstract: We consider the task of learning a classifier from the feature space $\mathcal{X}$ to the set of classes $\mathcal{Y} = \{0, 1\}$, when the features can be partitioned into class-conditionally independent feature sets $\mathcal{X}_1$ and $\mathcal{X}_2$. We show the surprising fact that the class-conditional independence can be used to represent the original learning task in terms of 1) learning a classifier from $\mathcal{X}_2$ to $\mathcal{X}_1$ and 2) learning the class-conditional distribution of the feature set $\mathcal{X}_1$. This fact can be exploited for semi-supervised learning because the former task can be accomplished purely from unlabeled samples. We present an experimental evaluation of the idea in two real-world applications.
Title: Smooth supersaturated models
Abstract: In areas such as kernel smoothing and non-parametric regression there is emphasis on smooth interpolation and smooth statistical models. Splines are known to have optimal smoothness properties in one and higher dimensions. It is shown, with special attention to polynomial models, that smooth interpolators can be constructed by first extending the monomial basis and then minimising a measure of smoothness with respect to the free parameters in the extended basis. Algebraic methods are a help in choosing the extended basis, which can also be found as a saturated basis for an extended experimental design with dummy design points. One can get arbitrarily close to optimal smoothing for any dimension and over any region, giving a simple alternative to models of spline type. The relationship to splines is shown in one and two dimensions. A case study is given which includes benchmarking against kriging methods.
Title: A Computational Study on Emotions and Temperament in Multi-Agent Systems
Abstract: Recent advances in neuroscience and psychology have provided evidence that affective phenomena pervade intelligence at many levels, being inseparable from the cognition-action loop. Perception, attention, memory, learning, decision-making, adaptation, communication, and social interaction are some of the aspects they influence. This work draws its inspiration from neurobiology, psychophysics, and sociology to approach the problem of building autonomous robots capable of interacting with each other and building strategies based on a temperamental decision mechanism. Modelling emotions is a relatively recent focus in artificial intelligence and cognitive modelling. Such models can ideally inform our understanding of human behavior. We may see the development of computational models of emotion as a core research focus that will facilitate advances in the large array of computational systems that model, interpret, or influence human behavior. We propose a model based on a scalable, flexible, and modular approach to emotion which allows runtime evaluation of the trade-off between emotional quality and performance. The results show that strategies based on the temperamental decision mechanism strongly influence system performance, and that there are evident dependencies between the emotional state of the agents and their temperamental type, as well as between team performance and the temperamental configuration of the team members. This enables us to conclude that a modular approach to emotional programming based on temperamental theory is a good choice for developing computational mind models for emotional behavioral multi-agent systems.
Title: An Information Geometric Framework for Dimensionality Reduction
Abstract: This report concerns the problem of dimensionality reduction through information geometric methods on statistical manifolds. While there has been considerable work recently presented regarding dimensionality reduction for the purposes of learning tasks such as classification, clustering, and visualization, these methods have focused primarily on Riemannian manifolds in Euclidean space. While sufficient for many applications, there are many high-dimensional signals which have no straightforward and meaningful Euclidean representation. In these cases, signals may be more appropriately represented as a realization of some distribution lying on a statistical manifold, or a manifold of probability density functions (PDFs). We present a framework for dimensionality reduction that uses information geometry for both statistical manifold reconstruction as well as dimensionality reduction in the data domain.
Title: Multi-Armed Bandits in Metric Spaces
Abstract: In a multi-armed bandit problem, an online algorithm chooses from a set of strategies in a sequence of trials so as to maximize the total payoff of the chosen strategies. While the performance of bandit algorithms with a small finite strategy set is quite well understood, bandit problems with large strategy sets are still a topic of very active investigation, motivated by practical applications such as online auctions and web advertisement. The goal of such research is to identify broad and natural classes of strategy sets and payoff functions which enable the design of efficient solutions. In this work we study a very general setting for the multi-armed bandit problem in which the strategies form a metric space, and the payoff function satisfies a Lipschitz condition with respect to the metric. We refer to this problem as the "Lipschitz MAB problem". We present a complete solution for the multi-armed bandit problem in this setting. That is, for every metric space (L,X) we define an isometry invariant which bounds from below the performance of Lipschitz MAB algorithms for X, and we present an algorithm which comes arbitrarily close to meeting this bound. Furthermore, our technique gives even better results for benign payoff functions.
Title: Thresholded Basis Pursuit: An LP Algorithm for Achieving Optimal Support Recovery for Sparse and Approximately Sparse Signals from Noisy Random Measurements
Abstract: In this paper we present a linear programming solution for sign pattern recovery of a sparse signal from noisy random projections of the signal. We consider two types of noise models: input noise, where noise enters before the random projection, and output noise, where noise enters after the random projection. Sign pattern recovery involves the estimation of the sign pattern of a sparse signal. Our idea is to pretend that no noise exists, solve the noiseless $\ell_1$ problem, namely, $\min \|\beta\|_1$ subject to $y = G\beta$, and quantize the resulting solution. We show that the quantized solution perfectly reconstructs the sign pattern of a sufficiently sparse signal. Specifically, we show that the sign pattern of an arbitrary k-sparse, n-dimensional signal $x$ can be recovered with $SNR=\Omega(\log n)$ and measurements scaling as $m= \Omega(k)$ for all sparsity levels $k$ satisfying $0< k \leq \alpha n$, where $\alpha$ is a sufficiently small positive constant. Surprisingly, this bound matches the optimal performance bounds in terms of $SNR$, required number of measurements, and admissible sparsity level in an order-wise sense. In contrast to our results, previous results based on LASSO and Max-Correlation techniques either assume significantly larger $SNR$, sublinear sparsity levels, or restrictive assumptions on signal sets. Our proof technique is based on noisy perturbation of the noiseless $\ell_1$ problem, in that we estimate the maximum admissible noise level before sign pattern recovery fails.
Title: Optimal experimental designs for inverse quadratic regression models
Abstract: In this paper optimal experimental designs for inverse quadratic regression models are determined. We consider two different parameterizations of the model and investigate locally optimal designs with respect to the $c$-, $D$- and $E$-criteria, which reflect various aspects of the precision of the maximum likelihood estimator for the parameters in inverse quadratic regression models. In particular, it is demonstrated that for a sufficiently large design space geometric allocation rules are optimal with respect to many optimality criteria. Moreover, in numerous cases the designs with respect to the different criteria are supported at the same points. Finally, the efficiencies of the different optimal designs with respect to various optimality criteria are studied, and the efficiency of some commonly used designs is investigated.