Abstract: In this paper, the framework of kernel machines with two layers is introduced, generalizing classical kernel methods. The new learning methodology provides a formal connection between computational architectures with multiple layers and the theme of kernel learning in standard regularization methods. First, a representer theorem for two-layer networks is presented, showing that finite linear combinations of kernels on each layer are optimal architectures whenever the corresponding functions solve suitable variational problems in reproducing kernel Hilbert spaces (RKHS). The input-output map expressed by these architectures turns out to be equivalent to a suitable single-layer kernel machine in which the kernel function is also learned from the data. Recently, the so-called multiple kernel learning methods have attracted considerable attention in the machine learning literature. In this paper, multiple kernel learning methods are shown to be specific cases of kernel machines with two layers in which the second layer is linear. Finally, a simple and effective multiple kernel learning method called RLS2 (regularized least squares with two layers) is introduced, and its performance on several learning problems is extensively analyzed. An open source MATLAB toolbox to train and validate RLS2 models with a graphical user interface is available.
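The reduction of multiple kernel learning to a two-layer machine with a linear second layer can be sketched in a few lines. The alternating scheme below (regularized least squares for the first-layer coefficients, an exponentiated-gradient update keeping the kernel weights on the simplex) is an illustrative stand-in under an assumed objective and step size, not the authors' RLS2 algorithm or MATLAB toolbox.

```python
# Illustrative sketch: single-layer RLS in which the kernel is itself a
# learned convex combination of base kernels (the linear second layer).
# Objective, step size and update rule are assumptions, not the RLS2 code.
import numpy as np

def rls_fit(K, y, lam):
    """Regularized least squares: solve (K + lam*n*I) c = y."""
    n = K.shape[0]
    return np.linalg.solve(K + lam * n * np.eye(n), y)

def mkl_rls(kernels, y, lam=0.1, eta=0.01, iters=100):
    m, n = len(kernels), len(y)
    d = np.full(m, 1.0 / m)              # second-layer weights on the simplex
    c = np.zeros(n)
    for _ in range(iters):
        K = sum(di * Ki for di, Ki in zip(d, kernels))
        c = rls_fit(K, y, lam)           # first layer, kernel weights fixed
        r = y - K @ c
        # gradient of ||y - Kc||^2 + lam*n*c'Kc in each kernel weight d_i
        g = np.array([-2 * r @ (Ki @ c) + lam * n * (c @ (Ki @ c))
                      for Ki in kernels])
        d *= np.exp(-eta * g)            # exponentiated-gradient step ...
        d /= d.sum()                     # ... keeps d on the simplex
    return d, c

# toy usage: learn weights over two Gaussian kernels of different widths
rng = np.random.default_rng(0)
X = rng.normal(size=(40, 3))
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=40)
sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
kernels = [np.exp(-sq / (2 * s ** 2)) for s in (0.5, 2.0)]
d, c = mkl_rls(kernels, y)
print("learned kernel weights:", d)
```

Constraining the second layer to a convex combination of base kernels is exactly the multiple-kernel-learning special case described in the abstract.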
|
Title: Scalable Bayesian reduced-order models for high-dimensional multiscale dynamical systems
|
Abstract: While existing mathematical descriptions can accurately account for phenomena at microscopic scales (e.g. molecular dynamics), these are often high-dimensional, stochastic and their applicability over macroscopic time scales of physical interest is computationally infeasible or impractical. In complex systems, with limited physical insight on the coherent behavior of their constituents, the only available information is data obtained from simulations of the trajectories of huge numbers of degrees of freedom over microscopic time scales. This paper discusses a Bayesian approach to deriving probabilistic coarse-grained models that simultaneously address the problems of identifying appropriate reduced coordinates and the effective dynamics in this lower-dimensional representation. At the core of the models proposed lie simple, low-dimensional dynamical systems which serve as the building blocks of the global model. These approximate the latent, generating sources and parameterize the reduced-order dynamics. We discuss parallelizable, online inference and learning algorithms that employ Sequential Monte Carlo samplers and scale linearly with the dimensionality of the observed dynamics. We propose a Bayesian adaptive time-integration scheme that utilizes probabilistic predictive estimates and enables rigorous concurrent simulation over macroscopic time scales. The data-driven perspective advocated assimilates computational and experimental data and thus can materialize data-model fusion. It can deal with applications that lack a mathematical description and where only observational data is available. Furthermore, it makes non-intrusive use of existing computational models.
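A minimal sketch of the Sequential Monte Carlo building block referred to above, here a bootstrap particle filter on a one-dimensional linear-Gaussian state-space model; the paper's coarse-grained models, online parameter learning and adaptive time integration are not reproduced, and all model constants are illustrative.

```python
# Bootstrap particle filter on a toy 1-D linear-Gaussian model
# (illustrative SMC building block only).
import numpy as np

rng = np.random.default_rng(9)
T, N = 50, 1000                      # time steps, particles
a, q, r = 0.9, 0.5, 0.5              # dynamics coeff., process/obs noise std

# simulate a latent trajectory and noisy observations
x_true = np.zeros(T)
y = np.zeros(T)
for t in range(1, T):
    x_true[t] = a * x_true[t - 1] + q * rng.normal()
    y[t] = x_true[t] + r * rng.normal()

particles = rng.normal(0, 1, N)
est = np.zeros(T)
for t in range(1, T):
    particles = a * particles + q * rng.normal(size=N)   # propagate
    logw = -0.5 * ((y[t] - particles) / r) ** 2          # likelihood weights
    w = np.exp(logw - logw.max())
    w /= w.sum()
    particles = rng.choice(particles, size=N, p=w)       # resample
    est[t] = particles.mean()                            # filtered estimate
print("RMSE:", np.sqrt(np.mean((est - x_true) ** 2)))
```

Each step touches every particle once, which is where the linear scaling in the dimensionality of the observed dynamics comes from.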
|
Title: Adaptive Gibbs samplers
|
Abstract: We consider various versions of adaptive Gibbs and Metropolis within-Gibbs samplers, which update their selection probabilities (and perhaps also their proposal distributions) on the fly during a run, by learning as they go in an attempt to optimise the algorithm. We present a cautionary example of how even a simple-seeming adaptive Gibbs sampler may fail to converge. We then present various positive results guaranteeing convergence of adaptive Gibbs samplers under certain conditions.
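A minimal sketch of the kind of sampler under study, assuming a bivariate Gaussian target and a simple hypothetical adaptation rule (selection probabilities re-balanced by running variance estimates); the conditions the paper establishes for valid adaptation, such as diminishing adaptation, are deliberately not engineered here.

```python
# Adaptive random-scan Gibbs sampler on a correlated bivariate Gaussian.
# The adaptation rule is a hypothetical illustration; convergence
# guarantees require conditions of the kind the paper develops.
import numpy as np

rng = np.random.default_rng(1)
rho = 0.9                        # correlation of the Gaussian target
x = np.zeros(2)
probs = np.array([0.5, 0.5])     # coordinate selection probabilities
var_est = np.ones(2)             # running variance estimate per coordinate
samples = []

for t in range(1, 20001):
    i = rng.choice(2, p=probs)   # random-scan coordinate choice
    j = 1 - i
    # exact full conditional of the standard bivariate Gaussian
    x[i] = rng.normal(rho * x[j], np.sqrt(1 - rho ** 2))
    # adapt on the fly: update the running variance, re-balance probs
    var_est[i] += (x[i] ** 2 - var_est[i]) / t
    probs = var_est / var_est.sum()
    samples.append(x.copy())

samples = np.array(samples)
print("selection probabilities:", probs)
print("sample covariance:\n", np.cov(samples[5000:].T))
```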
|
Title: A Monte Carlo Algorithm for Universally Optimal Bayesian Sequence Prediction and Planning
|
Abstract: The aim of this work is to address the question of whether we can, in principle, design rational decision-making agents or artificial intelligences embedded in computable physics such that their decisions are optimal in reasonable mathematical senses. Recent developments in rare event probability estimation, recursive Bayesian inference, neural networks, and probabilistic planning are sufficient to explicitly approximate reinforcement learners of the AIXI style with non-trivial model classes (here, the class of resource-bounded Turing machines). Consideration of the effects of resource limitations in a concrete implementation leads to insights about possible architectures for learning systems using optimal decision makers as components.
|
Title: Introducing Monte Carlo Methods with R Solutions to Odd-Numbered Exercises
|
Abstract: This is the solution manual to the odd-numbered exercises in our book "Introducing Monte Carlo Methods with R", published by Springer Verlag on December 10, 2009, and made freely available to everyone.
|
Title: Asymptotic Learning Curve and Renormalizable Condition in Statistical Learning Theory
|
Abstract: Bayes statistics and statistical physics share a common mathematical structure, in which the log likelihood function corresponds to a random Hamiltonian. Recently, it was discovered that the asymptotic learning curves in Bayes estimation are subject to a universal law, even if the log likelihood function cannot be approximated by any quadratic form. However, it remained unknown what mathematical property ensures such a universal law. In this paper, we define a renormalizable condition on the statistical estimation problem, and show that, under this condition, the asymptotic learning curves are guaranteed to obey the universal law, even if the true distribution is unrealizable and singular for the statistical model. We also study a nonrenormalizable case, in which the learning curves exhibit asymptotic behaviors different from the universal law.
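For reference, a hedged statement of the universal law alluded to above, in the form it takes in singular learning theory for the Bayes generalization error $G_n$ (the paper's renormalizable condition extends this to unrealizable, singular cases; $\lambda$ is the learning coefficient, a real log canonical threshold):

```latex
% Universal asymptotic law for the Bayes learning curve (hedged sketch):
% the expected generalization error decays at rate lambda/n.
\mathbb{E}[G_n] = \frac{\lambda}{n} + o\!\left(\frac{1}{n}\right),
\qquad
G_n = \int q(x)\,\log\frac{q(x)}{p(x \mid X_1,\dots,X_n)}\,dx .
```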
|
Title: Comment on "Harold Jeffreys's Theory of Probability Revisited"
|
Abstract: Comment on "Harold Jeffreys's Theory of Probability Revisited" [arXiv:0804.3173]
|
Title: Bayes, Jeffreys, Prior Distributions and the Philosophy of Statistics
|
Abstract: Discussion of "Harold Jeffreys's Theory of Probability revisited," by Christian Robert, Nicolas Chopin, and Judith Rousseau, for Statistical Science [arXiv:0804.3173]
|
Title: Comment: The Importance of Jeffreys's Legacy
|
Abstract: Theory of Probability is distinguished by several high-level philosophical attitudes, some stressed by Jeffreys, some implicit. By reviewing these we may recognize the importance of this work in the historical development of statistics. [arXiv:0804.3173]
|
Title: Comment on "Harold Jeffreys's Theory of Probability Revisited"
|
Abstract: Comment on "Harold Jeffreys's Theory of Probability Revisited" [arXiv:0804.3173]
|
Title: Comment on "Harold Jeffreys's Theory of Probability Revisited"
|
Abstract: Comment on "Harold Jeffreys's Theory of Probability Revisited" [arXiv:0804.3173]
|
Title: A Multivariate Variance Components Model for Analysis of Covariance in Designed Experiments
|
Abstract: Traditional methods for covariate adjustment of treatment means in designed experiments are inherently conditional on the observed covariate values. In order to develop a coherent general methodology for analysis of covariance, we propose a multivariate variance components model for the joint distribution of the response and covariates. It is shown that, if the design is orthogonal with respect to (random) blocking factors, then appropriate adjustments to treatment means can be made using the univariate variance components model obtained by conditioning on the observed covariate values. However, it is revealed that some widely used models are incorrectly specified, leading to biased estimates and incorrect standard errors. The approach clarifies some issues that have been the source of ongoing confusion in the statistics literature.
|
Title: Comment on "Harold Jeffreys's Theory of Probability Revisited"
|
Abstract: Comment on "Harold Jeffreys's Theory of Probability Revisited" [arXiv:0804.3173]
|
Title: Feature Extraction for Universal Hypothesis Testing via Rank-constrained Optimization
|
Abstract: This paper concerns the construction of tests for universal hypothesis testing problems, in which the alternative hypothesis is poorly modeled and the observation space is large. The mismatched universal test is a feature-based technique for this purpose. In prior work it is shown that its finite-observation performance can be much better than the (optimal) Hoeffding test, and good performance depends crucially on the choice of features. The contributions of this paper include: 1) We obtain bounds on the number of $\epsilon$-distinguishable distributions in an exponential family. 2) This motivates a new framework for feature extraction, cast as a rank-constrained optimization problem. 3) We obtain a gradient-based algorithm to solve the rank-constrained optimization problem and prove its local convergence.
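A generic sketch of the projected-gradient template for rank-constrained problems: take a gradient step, then project back onto the rank-$r$ set via truncated SVD. The objective below (masked matrix completion) is an illustrative stand-in, not the paper's feature-extraction objective.

```python
# Projected gradient for min ||mask * (A - Theta)||_F^2 s.t. rank(Theta) <= r.
# The rank projection is a truncated SVD; objective and data are illustrative.
import numpy as np

def project_rank(Theta, r):
    """Project onto {rank <= r} by truncating the SVD."""
    U, s, Vt = np.linalg.svd(Theta, full_matrices=False)
    s[r:] = 0.0
    return (U * s) @ Vt

def rank_constrained_ls(A, mask, r, step=0.5, iters=200):
    Theta = np.zeros_like(A)
    for _ in range(iters):
        grad = -2 * mask * (A - Theta)       # gradient of the squared loss
        Theta = project_rank(Theta - step * grad, r)
    return Theta

rng = np.random.default_rng(2)
A = rng.normal(size=(20, 5)) @ rng.normal(size=(5, 20))   # true rank 5
mask = rng.random(A.shape) < 0.7                          # observed entries
Theta = rank_constrained_ls(A, mask, r=5)
print("relative error:", np.linalg.norm(Theta - A) / np.linalg.norm(A))
```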
|
Title: Increasing stability and interpretability of gene expression signatures
|
Abstract: Motivation: Molecular signatures for diagnosis or prognosis estimated from large-scale gene expression data often lack robustness and stability, rendering their biological interpretation challenging. Increasing the signature's interpretability and stability across perturbations of a given dataset and, if possible, across datasets, is urgently needed to ease the discovery of important biological processes and, eventually, new drug targets. Results: We propose a new method to construct signatures with increased stability and easier interpretability. The method uses a gene network as side information and enforces a large connectivity among the genes in the signature, leading to signatures typically made of genes clustered in a few subnetworks. It combines the recently proposed graph Lasso procedure with a stability selection procedure. We evaluate its relevance for the estimation of a prognostic signature in breast cancer, and highlight in particular the increase in interpretability and stability of the signature.
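A minimal sketch of the stability-selection half of the method, using a plain Lasso from scikit-learn; the graph-guided penalty that enforces network connectivity is not reproduced, and the regularization level, subsample size and threshold are illustrative.

```python
# Stability selection with the Lasso: subsample half the data repeatedly,
# fit a sparse model, and keep features selected in a large fraction of runs.
import numpy as np
from sklearn.linear_model import Lasso

def stability_selection(X, y, alpha=0.1, n_runs=100, threshold=0.6, seed=0):
    rng = np.random.default_rng(seed)
    n, p = X.shape
    counts = np.zeros(p)
    for _ in range(n_runs):
        idx = rng.choice(n, size=n // 2, replace=False)   # random half
        coef = Lasso(alpha=alpha).fit(X[idx], y[idx]).coef_
        counts += coef != 0
    freq = counts / n_runs               # selection frequency per feature
    return np.where(freq >= threshold)[0], freq

rng = np.random.default_rng(3)
X = rng.normal(size=(100, 50))
y = X[:, :3] @ np.array([2.0, -1.5, 1.0]) + 0.5 * rng.normal(size=100)
stable, freq = stability_selection(X, y)
print("stable features:", stable)
```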
|
Title: An Immuno-Inspired Approach to Misbehavior Detection in Ad Hoc Wireless Networks
|
Abstract: We propose and evaluate an immuno-inspired approach to misbehavior detection in ad hoc wireless networks. Node misbehavior can be the result of an intrusion, or a software or hardware failure. Our approach is motivated by co-stimulatory signals present in the biological immune system. The results show that co-stimulation in ad hoc wireless networks can both substantially improve the energy efficiency of detection and, at the same time, help achieve low false-positive rates. The improvement in energy efficiency is almost two orders of magnitude compared with misbehavior detection based on watchdogs. We provide a characterization of the trade-offs between detection approaches executed by a single node and by several nodes in cooperation. Additionally, we investigate several feature sets for misbehavior detection. These feature sets impose different requirements on the detection system, most notably from the energy efficiency point of view.
|
Title: Bayesian Thought in Early Modern Detective Stories: Monsieur Lecoq, C. Auguste Dupin and Sherlock Holmes
|
Abstract: This paper reviews the maxims used by three early modern fictional detectives: Monsieur Lecoq, C. Auguste Dupin and Sherlock Holmes. It finds similarities between these maxims and Bayesian thought. Poe's Dupin uses ideas very similar to Bayesian game theory. Sherlock Holmes' statements also show thought patterns justifiable in Bayesian terms.
|
Title: A Conversation with Shayle R. Searle
|
Abstract: Born in New Zealand, Shayle Robert Searle earned a bachelor's degree (1949) and a master's degree (1950) from Victoria University, Wellington, New Zealand. After working for an actuary, Searle went to Cambridge University where he earned a Diploma in mathematical statistics in 1953. Searle won a Fulbright travel award to Cornell University, where he earned a doctorate in animal breeding, with a strong minor in statistics in 1959, studying under Professor Charles Henderson. In 1962, Cornell invited Searle to work in the university's computing center, and he soon joined the faculty as an assistant professor of biological statistics. He was promoted to associate professor in 1965, and became a professor of biological statistics in 1970. Searle has also been a visiting professor at Texas A&M University, Florida State University, Universität Augsburg and the University of Auckland. He has published several statistics textbooks and has authored more than 165 papers. Searle is a Fellow of the American Statistical Association, the Royal Statistical Society, and he is an elected member of the International Statistical Institute. He also has received the prestigious Alexander von Humboldt U.S. Senior Scientist Award, is an Honorary Fellow of the Royal Society of New Zealand and was recently awarded the D.Sc. Honoris Causa by his alma mater, Victoria University of Wellington, New Zealand.
|
Title: Bayesian inference for queueing networks and modeling of internet services
|
Abstract: Modern Internet services, such as those at Google, Yahoo!, and Amazon, handle billions of requests per day on clusters of thousands of computers. Because these services operate under strict performance requirements, a statistical understanding of their performance is of great practical interest. Such services are modeled by networks of queues, where each queue models one of the computers in the system. A key challenge is that the data are incomplete, because recording detailed information about every request to a heavily used system can require unacceptable overhead. In this paper we develop a Bayesian perspective on queueing models in which the arrival and departure times that are not observed are treated as latent variables. Underlying this viewpoint is the observation that a queueing model defines a deterministic transformation between the data and a set of independent variables called the service times. With this viewpoint in hand, we sample from the posterior distribution over missing data and model parameters using Markov chain Monte Carlo. We evaluate our framework on data from a benchmark Web application. We also present a simple technique for selection among nested queueing models. We are unaware of any previous work that considers inference in networks of queues in the presence of missing data.
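To make the "deterministic transformation" concrete, here is a minimal sketch for a single-server FIFO queue (an assumption; the paper treats networks of queues): departures follow a Lindley-type recursion in the arrivals and service times, and the map can be inverted to recover service times from timing data.

```python
# FIFO single-server queue: the departure recursion and its inverse,
# illustrating the deterministic map between timing data and service times.
import numpy as np

def departures(arrivals, services):
    """FIFO single server: d_i = max(a_i, d_{i-1}) + s_i."""
    d = np.empty_like(services)
    prev = 0.0
    for i, (a, s) in enumerate(zip(arrivals, services)):
        prev = max(a, prev) + s
        d[i] = prev
    return d

def services_from_data(arrivals, deps):
    """Invert the recursion: s_i = d_i - max(a_i, d_{i-1})."""
    s = np.empty_like(deps)
    prev = 0.0
    for i, (a, d) in enumerate(zip(arrivals, deps)):
        s[i] = d - max(a, prev)
        prev = d
    return s

rng = np.random.default_rng(4)
a = np.cumsum(rng.exponential(1.0, size=10))   # Poisson arrivals
s = rng.exponential(0.8, size=10)              # i.i.d. service times
d = departures(a, s)
print(np.allclose(services_from_data(a, d), s))   # True: the map inverts
```

With unobserved arrivals or departures treated as latent variables, this invertible map is what lets MCMC move between the timing data and the independent service times.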
|
Title: The dynamics of message passing on dense graphs, with applications to compressed sensing
|
Abstract: Approximate message passing algorithms have proved to be extremely effective in reconstructing sparse signals from a small number of incoherent linear measurements. Extensive numerical experiments further showed that their dynamics is accurately tracked by a simple one-dimensional iteration termed state evolution. In this paper we provide the first rigorous foundation for state evolution. We prove that it indeed holds asymptotically in the large system limit for sensing matrices with independent and identically distributed Gaussian entries. While our focus is on message passing algorithms for compressed sensing, the analysis extends beyond this setting, to a general class of algorithms on dense graphs. In this context, state evolution plays the role that density evolution has for sparse graphs. The proof technique is fundamentally different from the standard approach to density evolution, in that it copes with the large number of short loops in the underlying factor graph. It relies instead on a conditioning technique recently developed by Erwin Bolthausen in the context of spin glass theory.
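A minimal sketch of the one-dimensional state-evolution recursion for AMP with soft thresholding, evaluated by Monte Carlo; the signal prior, undersampling ratio and threshold rule are illustrative assumptions.

```python
# State evolution for AMP with soft thresholding: track the effective
# noise variance tau_t^2 across iterations via Monte Carlo expectation.
import numpy as np

rng = np.random.default_rng(5)
delta, eps, sigma2 = 0.5, 0.1, 0.01   # m/n ratio, sparsity, meas. noise
alpha = 1.5                           # threshold multiplier (illustrative)

def soft(x, t):
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

# signal prior: X0 = +-1 with prob eps/2 each, 0 otherwise
X0 = rng.choice([-1.0, 0.0, 1.0], size=200000, p=[eps / 2, 1 - eps, eps / 2])
Z = rng.normal(size=X0.size)

tau2 = sigma2 + np.mean(X0 ** 2) / delta     # initial effective variance
for t in range(15):
    denoised = soft(X0 + np.sqrt(tau2) * Z, alpha * np.sqrt(tau2))
    mse = np.mean((denoised - X0) ** 2)      # E[(eta(X0 + tau Z) - X0)^2]
    tau2 = sigma2 + mse / delta              # state-evolution update
    print(f"iter {t:2d}  tau^2 = {tau2:.5f}")
```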
|
Title: Role of Interestingness Measures in CAR Rule Ordering for Associative Classifier: An Empirical Approach
|
Abstract: An associative classifier is a technique that integrates association rule mining and classification. The difficult task in building an associative classifier model is the selection of relevant rules from a large number of class association rules (CARs). A very popular method of ordering rules for selection is based on confidence, support and antecedent size (CSA). Other methods are based on hybrid orderings in which the CSA method is combined with other measures. In the present work, we study the effect of using different interestingness measures of association rules in CAR rule ordering and selection for an associative classifier.
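The CSA ordering itself can be stated in a few lines: rank by confidence, break ties by support, then prefer the smaller antecedent. The rule fields and example rules below are hypothetical.

```python
# CSA ordering of class association rules: confidence desc, support desc,
# antecedent size asc. Rule structure and data are hypothetical examples.
from dataclasses import dataclass

@dataclass
class CAR:
    antecedent: frozenset
    label: str
    support: float
    confidence: float

rules = [
    CAR(frozenset({"a", "b"}), "pos", support=0.20, confidence=0.90),
    CAR(frozenset({"a"}),      "pos", support=0.25, confidence=0.90),
    CAR(frozenset({"c"}),      "neg", support=0.30, confidence=0.85),
]

ordered = sorted(rules, key=lambda r: (-r.confidence, -r.support,
                                       len(r.antecedent)))
for r in ordered:
    print(set(r.antecedent), "->", r.label, r.confidence, r.support)
```

Swapping the sort key for a different interestingness measure (lift, conviction, etc.) is exactly the kind of variation the study compares.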
|
Title: Features Based Text Similarity Detection
|
Abstract: As the Internet helps us cross cultural borders by providing access to information from around the world, plagiarism issues are bound to arise. As a result, plagiarism detection has become increasingly important in overcoming this issue. Different plagiarism detection tools have been developed based on various detection techniques. Nowadays, the fingerprint matching technique plays an important role in those detection tools. However, in handling large articles, the fingerprint matching technique has some weaknesses, especially in space and time consumption. In this paper, we propose a new approach to detect plagiarism which integrates the fingerprint matching technique with four key features to assist in the detection process. These proposed features are capable of choosing the main points, or key sentences, in the articles to be compared. The selected sentences then undergo the fingerprint matching process in order to detect the similarity between them. Hence, time and space usage for the comparison process is reduced without affecting the effectiveness of the plagiarism detection.
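A minimal sketch of the fingerprint-matching step only: hash overlapping character k-grams of each text and compare the fingerprint sets with Jaccard similarity. The four key-sentence features proposed in the paper are not reproduced, and the choice of k and the normalization are illustrative.

```python
# Text fingerprinting via hashed character k-grams, compared by Jaccard
# similarity. A toy illustration of the fingerprint-matching step.
def fingerprints(text, k=5):
    text = "".join(text.lower().split())          # normalize case/whitespace
    return {hash(text[i:i + k]) for i in range(len(text) - k + 1)}

def jaccard(a, b):
    return len(a & b) / len(a | b) if a | b else 0.0

doc1 = "The quick brown fox jumps over the lazy dog."
doc2 = "A quick brown fox jumped over a lazy dog."
print(f"similarity: {jaccard(fingerprints(doc1), fingerprints(doc2)):.2f}")
```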
|
Title: 3D Skull Recognition Using 3D Matching Technique
|
Abstract: Biometrics has become a "hot" area, and governments are funding research programs focused on it. In this paper, the problem of person recognition and verification based on a different biometric application is addressed. The system is based on 3D skull recognition using a 3D matching technique. The paper presents several biometric approaches in order to identify the weak points of using biometrics to authorize a person, and to ensure that the person who accesses the data is the genuine one. The features of the simulated system show the capability of using a 3D matching system as an efficient way to identify a person through his or her skull by matching it against a database; this technique guarantees fast processing while optimizing the false positive and false negative rates.
|
Title: Hybrid Medical Image Classification Using Association Rule Mining with Decision Tree Algorithm
|
Abstract: The proposed image mining method focuses on the classification of brain tumors in CT scan brain images. The major steps involved in the system are: pre-processing, feature extraction, association rule mining and hybrid classification. Pre-processing is done using median filtering, and edge features are extracted using the Canny edge detection technique. Two image mining approaches combined in a hybrid manner are proposed in this paper. The frequent patterns from the CT scan images are generated by the frequent pattern tree (FP-tree) algorithm, which mines the association rules. The decision tree method is then used to classify the medical images for diagnosis. This system makes the classification process more accurate, and the hybrid method improves on the efficiency of traditional image mining methods. The experimental results on a pre-diagnosed database of brain images showed 97% sensitivity and 95% accuracy. Physicians can make use of this accurate decision tree classification phase to classify brain images into normal, benign and malignant for effective medical diagnosis.
|
Title: Gradient Based Seeded Region Grow method for CT Angiographic Image Segmentation
|
Abstract: Segmentation of medical images using the seeded region growing technique is increasingly becoming a popular method because of its ability to involve high-level knowledge of anatomical structures in the seed selection process. Region-based segmentation of medical images is widely used in varied clinical applications like visualization, bone detection, tumor detection and unsupervised image retrieval in clinical databases. As medical images are mostly fuzzy in nature, segmenting regions based on intensity is a challenging task. In this paper, we discuss the popular seeded region growing methodology used for segmenting anatomical structures in CT angiography images. We propose a gradient-based homogeneity criterion to control the region growing process while segmenting CTA images.
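A minimal sketch of seeded region growing with a simple gradient (intensity-step) homogeneity criterion; the threshold and the synthetic image are illustrative, not the paper's CTA pipeline.

```python
# Seeded region growing: starting from a seed pixel, add 4-connected
# neighbours while the local intensity step stays below a threshold.
import numpy as np
from collections import deque

def region_grow(img, seed, grad_thresh=10.0):
    h, w = img.shape
    grown = np.zeros((h, w), dtype=bool)
    grown[seed] = True
    queue = deque([seed])
    while queue:
        y, x = queue.popleft()
        for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
            if 0 <= ny < h and 0 <= nx < w and not grown[ny, nx]:
                # gradient criterion: intensity step to the neighbour
                if abs(float(img[ny, nx]) - float(img[y, x])) < grad_thresh:
                    grown[ny, nx] = True
                    queue.append((ny, nx))
    return grown

img = np.zeros((64, 64)) + 20.0
img[16:48, 16:48] = 200.0               # bright synthetic "vessel" region
mask = region_grow(img, seed=(32, 32))
print("segmented pixels:", mask.sum())  # 32*32 = 1024
```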
|
Title: Estimation in functional regression for general exponential families
|
Abstract: This paper studies a class of exponential family models whose canonical parameters are specified as linear functionals of an unknown infinite-dimensional slope function. The optimal minimax rates of convergence for slope function estimation are established. The estimators that achieve the optimal rates are constructed by constrained maximum likelihood estimation with parameters whose dimension grows with sample size. A change-of-measure argument, inspired by Le Cam's theory of asymptotic equivalence, is used to eliminate the bias caused by the nonlinearity of exponential family models.
|
Title: The effect of discrete vs. continuous-valued ratings on reputation and ranking systems
|
Abstract: When users rate objects, a sophisticated algorithm that takes into account ability or reputation may produce a fairer or more accurate aggregation of ratings than the straightforward arithmetic average. Recently a number of authors have proposed different co-determination algorithms where estimates of user and object reputation are refined iteratively together, permitting accurate measures of both to be derived directly from the rating data. However, simulations demonstrating these methods' efficacy assumed a continuum of rating values, consistent with typical physical modelling practice, whereas in most actual rating systems only a limited range of discrete values (such as a 5-star system) is employed. We perform a comparative test of several co-determination algorithms with different scales of discrete ratings and show that this seemingly minor modification in fact has a significant impact on algorithms' performance. Paradoxically, where rating resolution is low, increased noise in users' ratings may even improve the overall performance of the system.
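A minimal sketch of one common variant of such co-determination algorithms, stated here with continuous ratings: object estimates are reputation-weighted means, and a user's reputation is the inverse of her mean squared rating error. Constants are illustrative; rounding the rating matrix to a discrete scale (e.g. 5 stars) reproduces the modification the paper studies.

```python
# Iterative co-determination of object quality and user reputation:
# weighted means and inverse-variance weights refined together.
import numpy as np

rng = np.random.default_rng(6)
n_users, n_objects = 30, 20
true_q = rng.uniform(1, 5, n_objects)
ability = rng.uniform(0.1, 1.0, n_users)          # per-user noise level
R = true_q[None, :] + rng.normal(0, ability[:, None], (n_users, n_objects))

w = np.ones(n_users)                               # initial reputations
for _ in range(50):
    q = (w[:, None] * R).sum(0) / w.sum()          # weighted object estimates
    err = ((R - q[None, :]) ** 2).mean(1)          # each user's squared error
    w = 1.0 / (err + 1e-8)                         # inverse-variance weights
print("corr with truth:", np.corrcoef(q, true_q)[0, 1])
```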
|
Title: Strict Monotonicity and Convergence Rate of Titterington's Algorithm for Computing D-optimal Designs
|
Abstract: We study a class of multiplicative algorithms introduced by Silvey et al. (1978) for computing D-optimal designs. Strict monotonicity is established for a variant considered by Titterington (1978). A formula for the rate of convergence is also derived. This is used to explain why modifications considered by Titterington (1978) and Dette et al. (2008) usually converge faster.
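A minimal sketch of the multiplicative algorithm in question, taking the exponent in the Silvey et al. class equal to 1: with information matrix $M(w) = \sum_i w_i x_i x_i'$, each weight is rescaled by its normalized variance function, $w_i \leftarrow w_i\, d_i(w)/k$ with $d_i(w) = x_i' M(w)^{-1} x_i$. The candidate set below is illustrative.

```python
# Multiplicative algorithm for D-optimal design weights. Since
# sum_i w_i d_i(w) = trace(I_k) = k, the update preserves sum(w) = 1.
import numpy as np

rng = np.random.default_rng(7)
X = rng.normal(size=(50, 3))            # candidate design points, k = 3
n, k = X.shape
w = np.full(n, 1.0 / n)                 # initial uniform design

def variance_fn(w):
    M = (w[:, None] * X).T @ X          # information matrix M(w)
    return np.einsum("ij,jk,ik->i", X, np.linalg.inv(M), X)

for _ in range(300):
    w *= variance_fn(w) / k             # multiplicative update

d = variance_fn(w)
print("max variance function:", d.max())    # -> k at a D-optimal design
print("support size:", (w > 1e-6).sum())
```

By the equivalence theorem, the maximum of the variance function approaches $k$ as the design approaches D-optimality, which gives a convenient stopping criterion.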
|
Title: Robustness and accuracy of methods for high dimensional data analysis based on Student's t statistic
|
Abstract: Student's $t$ statistic is finding applications today that were never envisaged when it was introduced more than a century ago. Many of these applications rely on properties, for example robustness against heavy tailed sampling distributions, that were not explicitly considered until relatively recently. In this paper we explore these features of the $t$ statistic in the context of its application to very high dimensional problems, including feature selection and ranking, highly multiple hypothesis testing, and sparse, high dimensional signal detection. Robustness properties of the $t$-ratio are highlighted, and it is established that those properties are preserved under applications of the bootstrap. In particular, bootstrap methods correct for skewness, and therefore lead to second-order accuracy, even in the extreme tails. Indeed, it is shown that the bootstrap, and also the more popular but less accurate $t$-distribution and normal approximations, are more effective in the tails than towards the middle of the distribution. These properties motivate new methods, for example bootstrap-based techniques for signal detection, that confine attention to the significant tail of a statistic.
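A minimal sketch of bootstrap calibration of the $t$ statistic, under assumptions (a one-sample upper-tail test, resamples recentred at the sample mean so they obey the null): the bootstrap distribution of the studentized statistic supplies the tail p-value and the skewness correction discussed above.

```python
# Bootstrap-t calibration: compare the observed studentized statistic to
# the bootstrap distribution of recentred studentized resamples.
import numpy as np

def boot_t_pvalue(x, mu0=0.0, B=5000, seed=0):
    rng = np.random.default_rng(seed)
    n = len(x)
    t_obs = (x.mean() - mu0) / (x.std(ddof=1) / np.sqrt(n))
    t_star = np.empty(B)
    for b in range(B):
        xb = rng.choice(x, size=n, replace=True)
        # centre at the sample mean so resamples satisfy the null
        t_star[b] = (xb.mean() - x.mean()) / (xb.std(ddof=1) / np.sqrt(n))
    return t_obs, np.mean(t_star >= t_obs)      # upper-tail p-value

rng = np.random.default_rng(8)
x = rng.exponential(1.0, size=30) - 0.8          # skewed sample
t_obs, p = boot_t_pvalue(x)
print(f"t = {t_obs:.3f}, bootstrap upper-tail p = {p:.4f}")
```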
|
Title: Non-Gaussian Quasi Maximum Likelihood Estimation of GARCH Models
|
Abstract: The non-Gaussian quasi maximum likelihood estimator is frequently used in GARCH models with the intention of improving the efficiency of the GARCH parameter estimates. However, unless the quasi-likelihood happens to be the true one, non-Gaussian QMLE methods suffer from inconsistency even if shape parameters in the quasi-likelihood are estimated. To correct this bias, we identify an unknown scale parameter that is critical to the consistent estimation of non-Gaussian QMLE, and propose a two-step non-Gaussian QMLE (2SNG-QMLE) for estimation of the scale parameter and the GARCH parameters. This novel approach is consistent and asymptotically normal. Moreover, it has higher efficiency than the Gaussian QMLE, particularly when the innovation error has heavy tails. Two extensions are proposed to further improve the efficiency of 2SNG-QMLE. The impact of the relative heaviness of the tails of the innovation and quasi-likelihood distributions on the asymptotic efficiency is thoroughly investigated. Monte Carlo simulations and an empirical study confirm the advantages of the proposed approach.
|
Title: Robust and Trend-following Kalman Smoothers using Student's t
|
Abstract: We propose two nonlinear Kalman smoothers that rely on Student's t distributions. The T-Robust smoother finds the maximum a posteriori (MAP) solution for Gaussian process noise and Student's t observation noise, and is extremely robust against outliers, outperforming the recently proposed l1-Laplace smoother in extreme situations (e.g. 50% or more outliers). The second estimator, which we call the T-Trend smoother, is able to follow sudden changes in the process model, and is derived as a MAP solver for a model with Student's t process noise and Gaussian observation noise. We design specialized methods to solve both problems which exploit the special structure of the Student's t distribution, and provide a convergence theory. Both smoothers can be implemented with only minor modifications to an existing L2 smoother implementation. Numerical results for linear and nonlinear models illustrating both robust and fast tracking applications are presented.
|
Title: Classifying Network Data with Deep Kernel Machines
|
Abstract: Inspired by a growing interest in analyzing network data, we study the problem of node classification on graphs, focusing on approaches based on kernel machines. Conventionally, kernel machines are linear classifiers in the implicit feature space. We argue that linear classification in the feature space of kernels commonly used for graphs is often not enough to produce good results. When this is the case, one naturally considers nonlinear classifiers in the feature space. We show that repeating this process produces something we call "deep kernel machines." We provide some examples where deep kernel machines can make a big difference in classification performance, and point out some connections to various recent literature on deep architectures in artificial intelligence and machine learning.
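One simple way to build a second kernel layer, sketched below under assumptions (a Gaussian kernel composed on top of a base kernel; the paper's graph kernels and classifiers differ): feature-space distances are available from the Gram matrix alone, via $\|\phi(x)-\phi(x')\|^2 = K(x,x) + K(x',x') - 2K(x,x')$.

```python
# Composing kernels into a "deep" kernel: a Gaussian kernel applied on top
# of the feature-space distances induced by a base kernel's Gram matrix.
import numpy as np

def gaussian_on_kernel(K, gamma=1.0):
    """Second-layer Gaussian kernel from a first-layer Gram matrix K."""
    diag = np.diag(K)
    sq_dist = diag[:, None] + diag[None, :] - 2 * K   # feature-space dists
    return np.exp(-gamma * sq_dist)

rng = np.random.default_rng(10)
X = rng.normal(size=(30, 4))
K1 = X @ X.T                      # linear base kernel (layer one)
K2 = gaussian_on_kernel(K1)       # deep (two-layer) kernel
print(K2.shape, np.allclose(K2, K2.T))
```

Iterating the construction (applying `gaussian_on_kernel` to `K2`, and so on) gives the repeated composition the abstract describes.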
|
Title: Relative Age Effect in Elite Sports: Methodological Bias or Real Discrimination?
|
Abstract: Sport sciences researchers talk about a relative age effect when they observe a biased distribution of elite athletes' birthdates, with an over-representation of those born at the beginning of the competitive year and an under-representation of those born at the end. Using the whole sample of the French male licensed soccer players (n = 1,831,524), our study suggests that there could be an important bias in the statistical test of this effect. This bias could in turn lead to falsely concluding that there is systemic discrimination in the recruitment of professional players. Our findings question the accuracy of past results concerning the existence of this effect at the elite level.
|
Title: Grouping Priors and the Bayesian Elastic Net
|
Abstract: In the literature surrounding Bayesian penalized regression, the two primary choices of prior distribution on the regression coefficients are zero-mean Gaussian and Laplace. While both have been compared numerically and theoretically, there remains little guidance on which to use in real-life situations. We propose two viable solutions to this problem in the form of prior distributions which combine and compromise between Laplace and Gaussian priors, respectively. Through cross-validation the prior which optimizes prediction performance is automatically selected. We then demonstrate the improved performance of these new prior distributions relative to Laplace and Gaussian priors in both a simulated and experimental environment.
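A hedged sketch of the kind of compromise prior at issue: the elastic-net form combines the Laplace ($\ell_1$) and Gaussian ($\ell_2$) penalties on each coefficient, with $\lambda_1, \lambda_2 \ge 0$ selected, e.g., by cross-validation.

```latex
% Elastic-net-type compromise prior between Laplace and Gaussian
% (hedged sketch; hyperparameters tuned by cross-validation).
p(\beta \mid \lambda_1, \lambda_2)
  \;\propto\;
  \exp\!\Big( -\lambda_1 \sum_{j} |\beta_j|
              \;-\; \lambda_2 \sum_{j} \beta_j^2 \Big)
```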
|