Title: SparseCodePicking: feature extraction in mass spectrometry using sparse coding algorithms
Abstract: Mass spectrometry (MS) is an important technique for chemical profiling which computes, for a sample, a high-dimensional histogram-like spectrum. A crucial step of MS data processing is peak picking, which selects the peaks carrying information about molecules of high concentration, i.e. those of interest in an MS investigation. We present a new peak-picking procedure based on a sparse coding algorithm. Given a set of spectra of different classes, i.e. with different positions and heights of the peaks, this procedure can extract peaks by means of unsupervised learning. Instead of the $l_1$-regularization penalty term used in the original sparse coding algorithm we propose an elastic-net penalty term for better regularization. The evaluation is done by means of simulation. We show that for a large region of parameters the proposed peak-picking method based on the sparse coding features outperforms a mean-spectrum-based method. Moreover, we demonstrate the procedure by applying it to two real-life datasets.
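As a rough illustration of the elastic-net penalty this abstract proposes, the following sketch sparse-codes a signal against a fixed dictionary by coordinate descent. The dictionary, penalty weights `lam1`/`lam2`, and all dimensions are illustrative assumptions, not the authors' setup.

```python
import numpy as np

def elastic_net_code(D, y, lam1=0.1, lam2=0.1, n_iter=100):
    """Sparse-code y against dictionary D (columns = atoms) under an
    elastic-net penalty 0.5*||y - D c||^2 + lam1*||c||_1 + 0.5*lam2*||c||^2,
    via coordinate descent. Generic sketch, not the authors' implementation."""
    c = np.zeros(D.shape[1])
    for _ in range(n_iter):
        for j in range(D.shape[1]):
            r = y - D @ c + D[:, j] * c[j]   # residual with atom j removed
            rho = D[:, j] @ r
            # soft-thresholding step of elastic-net coordinate descent
            c[j] = np.sign(rho) * max(abs(rho) - lam1, 0.0) / (D[:, j] @ D[:, j] + lam2)
    return c

rng = np.random.default_rng(7)
D = rng.standard_normal((50, 20))
D /= np.linalg.norm(D, axis=0)   # unit-norm atoms ("peak templates")
y = 2.0 * D[:, 3]                # a spectrum built from a single atom
c = elastic_net_code(D, y)
```

The $l_2$ part of the penalty keeps the update well-posed even for correlated atoms, which is the usual motivation for preferring elastic net over pure $l_1$.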
Title: A Numerical Approach to Performance Analysis of Quickest Change-Point Detection Procedures
Abstract: For the most popular sequential change detection rules such as CUSUM, EWMA, and the Shiryaev-Roberts test, we develop integral equations and a concise numerical method to compute a number of performance metrics, including average detection delay and average time to false alarm. We pay special attention to the Shiryaev-Roberts procedure and evaluate its performance for various initialization strategies. Regarding the randomized initialization variant proposed by Pollak, known to be asymptotically optimal of order-3, we offer a means for numerically computing the quasi-stationary distribution of the Shiryaev-Roberts statistic that is the distribution of the initializing random variable, thus making this test applicable in practice. A significant side-product of our computational technique is the observation that deterministic initializations of the Shiryaev-Roberts procedure can also enjoy the same order-3 optimality property as Pollak's randomized test and, after careful selection, even uniformly outperform it.
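The CUSUM rule mentioned above has a one-line recursion. The sketch below implements it for a known Gaussian mean shift; the pre/post-change means, threshold, and data are illustrative choices, not values from the paper.

```python
import math
import random

def cusum_alarm_time(xs, mu0=0.0, mu1=1.0, sigma=1.0, threshold=5.0):
    """Return the first time the CUSUM statistic crosses `threshold`,
    or None if it never does. Uses the Gaussian log-likelihood ratio
    for a mean shift from mu0 to mu1."""
    w = 0.0
    for n, x in enumerate(xs, start=1):
        # log-likelihood ratio of one observation under mu1 vs. mu0
        llr = (mu1 - mu0) * (x - (mu0 + mu1) / 2.0) / sigma**2
        w = max(0.0, w + llr)  # CUSUM recursion
        if w >= threshold:
            return n
    return None

random.seed(0)
# 100 pre-change samples (mean 0), then 50 post-change samples (mean 1)
data = [random.gauss(0, 1) for _ in range(100)] + [random.gauss(1, 1) for _ in range(50)]
alarm = cusum_alarm_time(data)
```

The performance metrics the abstract computes (average detection delay, average time to false alarm) are exactly the expectations of this alarm time under the post- and pre-change regimes.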
Title: Message Passing Algorithms for Compressed Sensing
Abstract: Compressed sensing aims to undersample certain high-dimensional signals, yet accurately reconstruct them by exploiting signal characteristics. Accurate reconstruction is possible when the object to be recovered is sufficiently sparse in a known basis. Currently, the best known sparsity-undersampling tradeoff is achieved when reconstructing by convex optimization -- which is expensive in important large-scale applications. Fast iterative thresholding algorithms have been intensively studied as alternatives to convex optimization for large-scale problems. Unfortunately known fast algorithms offer substantially worse sparsity-undersampling tradeoffs than convex optimization. We introduce a simple costless modification to iterative thresholding making the sparsity-undersampling tradeoff of the new algorithms equivalent to that of the corresponding convex optimization procedures. The new iterative-thresholding algorithms are inspired by belief propagation in graphical models. Our empirical measurements of the sparsity-undersampling tradeoff for the new algorithms agree with theoretical calculations. We show that a state evolution formalism correctly derives the true sparsity-undersampling tradeoff. There is a surprising agreement between earlier calculations based on random convex polytopes and this new, apparently very different theoretical formalism.
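A minimal sketch of the belief-propagation-inspired modification: plain iterative soft thresholding plus the extra memory term (the `b * z` correction in the residual update) that restores the convex-optimization tradeoff. The residual-based thresholding policy and problem sizes below are illustrative assumptions, not the paper's tuning.

```python
import numpy as np

def soft(x, t):
    """Soft-thresholding (the scalar denoiser)."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def amp(A, y, n_iter=50, alpha=2.0):
    """Approximate-message-passing sketch for sparse recovery. The
    `b * z` term is the correction that distinguishes this iteration
    from plain iterative soft thresholding."""
    m, n = A.shape
    x = np.zeros(n)
    z = y.copy()
    for _ in range(n_iter):
        theta = alpha * np.linalg.norm(z) / np.sqrt(m)  # threshold from residual energy
        x = soft(x + A.T @ z, theta)                    # thresholding step
        b = np.count_nonzero(x) / m                     # correction coefficient
        z = y - A @ x + b * z                           # residual with memory term
    return x

rng = np.random.default_rng(0)
n, m, k = 200, 100, 10
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[:k] = rng.standard_normal(k)
y = A @ x_true                     # noiseless undersampled measurements
x_hat = amp(A, y)
err = np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true)
```

Dropping the `b * z` term recovers ordinary iterative thresholding, which needs far more measurements for the same sparsity, per the tradeoff discussed above.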
Title: Image Sampling with Quasicrystals
Abstract: We investigate the use of quasicrystals in image sampling. Quasicrystals produce space-filling, non-periodic point sets that are uniformly discrete and relatively dense, thereby ensuring the sample sites are evenly spread out throughout the sampled image. Their self-similar structure can be attractive for creating sampling patterns endowed with a decorative symmetry. We present a brief general overview of the algebraic theory of cut-and-project quasicrystals based on the geometry of the golden ratio. To assess the practical utility of quasicrystal sampling, we evaluate the visual effects of a variety of non-adaptive image sampling strategies on photorealistic image reconstruction and non-photorealistic image rendering used in multiresolution image representations. For computer visualization of point sets used in image sampling, we introduce a mosaic rendering technique.
Title: Empirical Bernstein Bounds and Sample Variance Penalization
Abstract: We give improved constants for data-dependent and variance-sensitive confidence bounds, called empirical Bernstein bounds, and extend these inequalities to hold uniformly over classes of functions whose growth function is polynomial in the sample size n. The bounds lead us to consider sample variance penalization, a novel learning method which takes into account the empirical variance of the loss function. We give conditions under which sample variance penalization is effective. In particular, we present a bound on the excess risk incurred by the method. Using this, we argue that there are situations in which the excess risk of our method is of order 1/n, while the excess risk of empirical risk minimization is of order 1/sqrt(n). We show some experimental results which confirm the theory. Finally, we discuss the potential application of our results to sample compression schemes.
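For concreteness, one published form of the empirical Bernstein confidence radius for [0,1]-valued i.i.d. samples can be computed directly; the demo data below are illustrative.

```python
import math

def empirical_bernstein_radius(xs, delta=0.05):
    """Confidence radius sqrt(2 V ln(2/d)/n) + 7 ln(2/d)/(3(n-1))
    for i.i.d. samples in [0, 1], where V is the sample variance."""
    n = len(xs)
    mean = sum(xs) / n
    var = sum((x - mean) ** 2 for x in xs) / (n - 1)  # sample variance
    log_term = math.log(2.0 / delta)
    return math.sqrt(2.0 * var * log_term / n) + 7.0 * log_term / (3.0 * (n - 1))

r_low = empirical_bernstein_radius([0.5] * 100)        # zero-variance sample
r_high = empirical_bernstein_radius([0.0, 1.0] * 50)   # maximal-variance sample
```

When the sample variance is small, the square-root term vanishes and the radius decays like 1/n rather than 1/sqrt(n), mirroring the excess-risk comparison in the abstract.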
Title: Un syst\`eme modulaire d'acquisition automatique de traductions \`a partir du Web
Abstract: We present a method for the automatic translation (French/English) of Complex Lexical Units (CLU), aimed at extracting a bilingual lexicon. Our modular system is based on linguistic properties (compositionality, polysemy, etc.). Different aspects of the multilingual Web are used to validate candidate translations and collect new terms. We first build a French corpus of Web pages to collect CLU. Three adapted processing stages are applied for each linguistic property: compositional and non-polysemous translations, compositional polysemous translations, and non-compositional translations. Our evaluation on a sample of CLU shows that our Web-based technique achieves very high precision.
Title: Self-adaptive web intrusion detection system
Abstract: The evolution of web server contents and the emergence of new kinds of intrusions make the adaptation of intrusion detection systems (IDS) necessary. Nowadays, adapting an IDS requires manual -- tedious and unreactive -- actions from system administrators. In this paper, we present a self-adaptive intrusion detection system which relies on a set of local model-based diagnosers. The redundancy of diagnoses is exploited, online, by a meta-diagnoser to check the consistency of computed partial diagnoses, and to trigger the adaptation of defective diagnoser models (or signatures) in case of inconsistency. This system is applied to intrusion detection on a stream of HTTP requests. Our results show that our system 1) detects intrusion occurrences sensitively and precisely, and 2) accurately self-adapts its diagnoser models, thus improving its detection accuracy.
Title: Gamma-based clustering via ordered means with application to gene-expression analysis
Abstract: Discrete mixture models provide a well-known basis for effective clustering algorithms, although technical challenges have limited their scope. In the context of gene-expression data analysis, a model is presented that mixes over a finite catalog of structures, each one representing equality and inequality constraints among latent expected values. Computations depend on the probability that independent gamma-distributed variables attain each of their possible orderings. Each ordering event is equivalent to an event in independent negative-binomial random variables, and this finding guides a dynamic-programming calculation. The structuring of mixture-model components according to constraints among latent means leads to strict concavity of the mixture log likelihood. In addition to its beneficial numerical properties, the clustering method shows promising results in an empirical study.
Title: Artificial Dendritic Cells: Multi-faceted Perspectives
Abstract: Dendritic cells are the crime scene investigators of the human immune system. Their function is to correlate potentially anomalous invading entities with observed damage to the body. The detection of such invaders by dendritic cells results in the activation of the adaptive immune system, eventually leading to the removal of the invader from the host body. This mechanism has provided inspiration for the development of a novel bio-inspired algorithm, the Dendritic Cell Algorithm. This algorithm processes information at multiple levels of resolution, resulting in the creation of information granules of variable structure. In this chapter we examine the multi-faceted nature of immunology and how research in this field has shaped the function of the resulting Dendritic Cell Algorithm. A brief overview of the algorithm is given in combination with the details of the processes used for its development. The chapter is concluded with a discussion of the parallels between our understanding of the human immune system and how such knowledge influences the design of artificial immune systems.
Title: The Utility of Reliability and Survival
Abstract: Reliability (survival analysis, to biostatisticians) is a key ingredient for making decisions that mitigate the risk of failure. The other key ingredient is utility. A decision theoretic framework harnesses the two, but to invoke this framework we must distinguish between chance and probability. We describe a functional form for the utility of chance that incorporates all dispositions to risk, and propose a probability of choice model for eliciting this utility. To implement the model a subject is asked to make a series of binary choices between gambles and certainty. These choices endow a statistical character to the problem of utility elicitation. The workings of our approach are illustrated via a live example involving a military planner. The material is general because it is germane to any situation involving the valuation of chance.
Title: Contextual Bandits with Similarity Information
Abstract: In a multi-armed bandit (MAB) problem, an online algorithm makes a sequence of choices. In each round it chooses from a time-invariant set of alternatives and receives the payoff associated with this alternative. While the case of small strategy sets is by now well-understood, a lot of recent work has focused on MAB problems with exponentially or infinitely large strategy sets, where one needs to assume extra structure in order to make the problem tractable. In particular, recent literature considered information on similarity between arms. We consider similarity information in the setting of "contextual bandits", a natural extension of the basic MAB problem where before each round an algorithm is given the "context" -- a hint about the payoffs in this round. Contextual bandits are directly motivated by placing advertisements on webpages, one of the crucial problems in sponsored search. A particularly simple way to represent similarity information in the contextual bandit setting is via a "similarity distance" between the context-arm pairs which gives an upper bound on the difference between the respective expected payoffs. Prior work on contextual bandits with similarity uses "uniform" partitions of the similarity space, which is potentially wasteful. We design more efficient algorithms that are based on adaptive partitions adjusted to "popular" context and "high-payoff" arms.
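The "uniform partition" baseline that this abstract improves on is easy to sketch: discretize the context space into equal bins and run a UCB index independently for each (bin, arm) pair. The Bernoulli payoff model, bin count, and horizon below are toy assumptions for illustration only.

```python
import math
import random

def ucb_uniform_partition(T=20000, bins=10, rng=random):
    """Contextual bandit with a uniform partition of the context space
    [0,1] and a UCB1 index per (bin, arm). Payoffs are Bernoulli with
    means x (arm 0) and 1-x (arm 1), so the best arm depends on context."""
    counts = [[0, 0] for _ in range(bins)]
    sums = [[0.0, 0.0] for _ in range(bins)]
    total = 0.0
    for t in range(1, T + 1):
        x = rng.random()                        # observed context
        b = min(int(x * bins), bins - 1)        # its bin in the uniform partition
        def index(a):
            if counts[b][a] == 0:
                return float("inf")             # play each arm once per bin
            return sums[b][a] / counts[b][a] + math.sqrt(2 * math.log(t) / counts[b][a])
        a = max((0, 1), key=index)
        p = x if a == 0 else 1.0 - x
        r = 1.0 if rng.random() < p else 0.0
        counts[b][a] += 1
        sums[b][a] += r
        total += r
    return total / T

random.seed(6)
avg = ucb_uniform_partition()
```

Here the optimal policy earns E[max(x, 1-x)] = 0.75 per round; the uniform partition gets close but wastes exploration in bins where the arms are nearly tied, which is the inefficiency adaptive partitions target.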
Title: Simulation of truncated normal variables
Abstract: We provide in this paper simulation algorithms for one-sided and two-sided truncated normal distributions. These algorithms are then used to simulate multivariate normal variables with restricted parameter space for any covariance structure.
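For the one-sided case, a classical accept-reject scheme in the spirit of this paper (a shifted exponential proposal with an optimally chosen rate) is short enough to sketch; the truncation point and sample size below are illustrative.

```python
import math
import random

def truncnorm_onesided(a, rng=random):
    """Sample from a standard normal truncated to [a, inf), a > 0,
    by rejection from a shifted exponential proposal."""
    alpha = (a + math.sqrt(a * a + 4.0)) / 2.0  # rate maximizing acceptance
    while True:
        x = a + rng.expovariate(alpha)          # shifted exponential proposal
        if rng.random() <= math.exp(-(x - alpha) ** 2 / 2.0):
            return x

random.seed(1)
samples = [truncnorm_onesided(2.0) for _ in range(5000)]
mean = sum(samples) / len(samples)
```

Even for truncation points far in the tail, where naive resampling of N(0,1) draws almost never succeeds, this sampler's acceptance rate stays high.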
Title: Simulating Events of Unknown Probabilities via Reverse Time Martingales
Abstract: Assume that one aims to simulate an event of unknown probability $s\in (0,1)$ which is uniquely determined, but of which only approximations can be obtained using a finite computational effort. Such settings are often encountered in statistical simulations. We consider two specific examples: first, the exact simulation of non-linear diffusions; second, the celebrated Bernoulli factory problem of generating an $f(p)$-coin given a sequence $X_1,X_2,...$ of independent tosses of a $p$-coin (with known $f$ and unknown $p$). We describe a general framework and provide algorithms into which problems of this kind can be fitted and solved. The algorithms are straightforward to implement and thus allow for effective simulation of desired events of probability $s$. In the case of diffusions, we obtain a known exact-simulation algorithm as a specific instance of the generic framework developed here. In the case of the Bernoulli factory, our work offers a statistical understanding of the Nacu-Peres algorithm for $f(p) = \min\{2p, 1-2\varepsilon\}$ (which is central to the general question) and allows for an immediate implementation that avoids algorithmic difficulties of the original version.
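The sandwiching idea behind such algorithms can be sketched directly: draw one uniform and refine monotone lower/upper bounds on the unknown $s$ until the uniform is separated from it. The toy target below (s = 1/e, bracketed by alternating-series partial sums) is an illustrative assumption, not an example from the paper.

```python
import math
import random

def sample_event(lower, upper, rng=random):
    """Return 1 with (unknown) probability s, given monotone bounds
    lower(n) <= s <= upper(n) that converge to s as n grows. Draw one
    uniform and refine the bounds until it is separated from s."""
    u = rng.random()
    n = 1
    while True:
        if u <= lower(n):
            return 1
        if u > upper(n):
            return 0
        n += 1

def partial_sum(N):
    """Partial sum of the alternating series for exp(-1)."""
    return sum((-1) ** k / math.factorial(k) for k in range(N + 1))

lower = lambda n: partial_sum(2 * n + 1)   # odd partial sums underestimate 1/e
upper = lambda n: partial_sum(2 * n)       # even partial sums overestimate 1/e

random.seed(2)
freq = sum(sample_event(lower, upper) for _ in range(20000)) / 20000
```

Since u != s with probability one, the loop terminates almost surely, and the event fires with probability exactly s even though s is never computed exactly.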
Title: A fifth order expansion for the distribution function of the maximum likelihood estimator
Abstract: In this paper, expansions for the maximum likelihood estimator of location and its distribution function are extended to fifth order. Since the proofs are straightforward extensions of proofs given in earlier papers for orders less than the fifth, they are not given here. The purpose of the paper is mainly to present the higher order expansions.
Title: Beyond Turing Machines
Abstract: This paper discusses "computational" systems capable of "computing" functions not computable by predefined Turing machines if the systems are not isolated from their environment. Roughly speaking, these systems can change their finite descriptions by interacting with their environment.
Title: Relativized hyperequivalence of logic programs for modular programming
Abstract: A recent framework of relativized hyperequivalence of programs offers a unifying generalization of strong and uniform equivalence. It seems to be especially well suited for applications in program optimization and modular programming due to its flexibility that allows us to restrict, independently of each other, the head and body alphabets in context programs. We study relativized hyperequivalence for the three semantics of logic programs given by stable, supported and supported minimal models. For each semantics, we identify four types of contexts, depending on whether the head and body alphabets are given directly or as the complement of a given set. Hyperequivalence relative to contexts where the head and body alphabets are specified directly has been studied before. In this paper, we establish the complexity of deciding relativized hyperequivalence with respect to the three other types of context programs. To appear in Theory and Practice of Logic Programming (TPLP).
Title: Notes on Using Control Variates for Estimation with Reversible MCMC Samplers
Abstract: A general methodology is presented for the construction and effective use of control variates for reversible MCMC samplers. The values of the coefficients of the optimal linear combination of the control variates are computed, and adaptive, consistent MCMC estimators are derived for these optimal coefficients. All methodological and asymptotic arguments are rigorously justified. Numerous MCMC simulation examples from Bayesian inference applications demonstrate that the resulting variance reduction can be quite dramatic.
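A minimal i.i.d. illustration of the control-variate mechanics (the paper's setting is reversible MCMC, where the optimal coefficients are derived for correlated samples): estimate the optimal linear coefficient from the data and subtract the centered control variate. The target and control variate below are illustrative choices.

```python
import math
import random

random.seed(3)
# Estimate E[exp(X)] for X ~ N(0,1) (true value exp(1/2)),
# using g(x) = x as a control variate with known mean 0.
xs = [random.gauss(0, 1) for _ in range(50000)]
f = [math.exp(x) for x in xs]
g = xs
n = len(xs)
mf = sum(f) / n
mg = sum(g) / n
# empirical optimal coefficient theta = Cov(f, g) / Var(g)
cov = sum((fi - mf) * (gi - mg) for fi, gi in zip(f, g)) / n
var = sum((gi - mg) ** 2 for gi in g) / n
theta = cov / var
estimate = mf - theta * mg   # control-variate estimator (E[g] = 0 is known)
```

Here theta is estimated adaptively from the same samples, echoing the adaptive, consistent coefficient estimators of the abstract; the subtraction removes the component of f correlated with g and shrinks the variance accordingly.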
Title: Learning Object Location Predictors with Boosting and Grammar-Guided Feature Extraction
Abstract: We present BEAMER: a new spatially exploitative approach to learning object detectors which shows excellent results when applied to the task of detecting objects in greyscale aerial imagery in the presence of ambiguous and noisy data. There are four main contributions used to produce these results. First, we introduce a grammar-guided feature extraction system, enabling the exploration of a richer feature space while constraining the features to a useful subset. This is specified with a rule-based generative grammar crafted by a human expert. Second, we learn a classifier on this data using a newly proposed variant of AdaBoost which takes into account the spatially correlated nature of the data. Third, we perform another round of training to optimize the method of converting the pixel classifications generated by boosting into a high quality set of (x, y) locations. Lastly, we carefully define three common problems in object detection and define two evaluation criteria that are tightly matched to these problems. Major strengths of this approach are: (1) a way of randomly searching a broad feature space, (2) its performance when evaluated on well-matched evaluation criteria, and (3) its use of the location prediction domain to learn object detectors as well as to generate detections that perform well on several tasks: object counting, tracking, and target detection. We demonstrate the efficacy of BEAMER with a comprehensive experimental evaluation on a challenging data set.
Title: Precision Measurements of the Cluster Red Sequence using an Error Corrected Gaussian Mixture Model
Abstract: The red sequence is an important feature of galaxy clusters and plays a crucial role in optical cluster detection. Measurements of the slope and scatter of the red sequence are affected both by the selection of red sequence galaxies and by measurement errors. In this paper, we describe a new error-corrected Gaussian Mixture Model for red sequence galaxy identification. Using this technique, we can remove the effects of measurement error and extract unbiased information about the intrinsic properties of the red sequence. We use this method to select red sequence galaxies in each of the 13,823 clusters in the maxBCG catalog, and measure the red sequence ridgeline location and scatter of each. These measurements provide precise constraints on the variation of the average red galaxy populations in the observed frame with redshift. We find that the scatter of the red sequence ridgeline increases mildly with redshift, and that the slope decreases with redshift. We also observe that the slope does not strongly depend on cluster richness. Using similar methods, we show that this behavior is mirrored in a spectroscopic sample of field galaxies, further emphasizing that ridgeline properties are independent of environment.
Title: The Cost of Stability in Coalitional Games
Abstract: A key question in cooperative game theory is that of coalitional stability, usually captured by the notion of the core -- the set of outcomes such that no subgroup of players has an incentive to deviate. However, some coalitional games have empty cores, and any outcome in such a game is unstable. In this paper, we investigate the possibility of stabilizing a coalitional game by using external payments. We consider a scenario where an external party, which is interested in having the players work together, offers a supplemental payment to the grand coalition (or, more generally, a particular coalition structure). This payment is conditional on players not deviating from their coalition(s). The sum of this payment plus the actual gains of the coalition(s) may then be divided among the agents so as to promote stability. We define the cost of stability as the minimal external payment that stabilizes the game. We provide general bounds on the cost of stability in several classes of games, and explore its algorithmic properties. To develop a better intuition for the concepts we introduce, we provide a detailed algorithmic study of the cost of stability in weighted voting games, a simple but expressive class of games which can model decision-making in political bodies, and cooperation in multiagent settings. Finally, we extend our model and results to games with coalition structures.
Title: Graphical Probabilistic Routing Model for OBS Networks with Realistic Traffic Scenario
Abstract: Burst contention is a well-known challenging problem in Optical Burst Switching (OBS) networks. Contention resolution approaches are reactive: they attempt to minimize the burst loss ratio (BLR) based on local information available at the core node. A proactive approach that avoids burst losses before they occur is therefore desirable. To reduce the probability of burst contention, a routing algorithm more robust than shortest-path routing is needed. This paper proposes a new routing mechanism for JET-based OBS networks, called the Graphical Probabilistic Routing Model (GPRM), which selects less-utilized links on a hop-by-hop basis using a Bayesian network. We assume that neither wavelength conversion nor buffering is available at the core nodes of the OBS network. We simulate the proposed approach under dynamic load, using Network Simulator 2 (ns-2) on the NSFnet topology with a realistic traffic matrix. Simulation results clearly show that the proposed approach outperforms static approaches in terms of BLR.
Title: Pattern Recognition Theory of Mind
Abstract: I propose that pattern recognition, memorization and processing are key concepts that can serve as a set of principles for the theoretical modeling of mind function. Most questions about the functioning of the mind can be answered by descriptive modeling and definitions based on these principles. An understandable definition of consciousness can be drawn from the assumption that a pattern recognition system can recognize its own patterns of activity. The principles, descriptive modeling and definitions can be a basis for theoretical and applied research in the cognitive sciences, particularly in artificial intelligence studies.
Title: Fact Sheet on Semantic Web
Abstract: The report gives an overview about activities on the topic Semantic Web. It has been released as technical report for the project "KTweb -- Connecting Knowledge Technologies Communities" in 2003.
Title: Mean-Field Theory of Meta-Learning
Abstract: We discuss here the mean-field theory for a cellular automata model of meta-learning. Meta-learning is the process of combining the outcomes of individual learning procedures in order to reach a final decision with higher accuracy than any single learning method. Our method is constructed from an ensemble of interacting, learning agents that acquire and process incoming information using various types, or different versions, of machine learning algorithms. The abstract learning space, where all agents are located, is constructed here using a fully connected model that couples all agents with random strength values. The cellular automata network simulates the higher-level integration of information acquired from the independent learning trials. The final classification of incoming input data is therefore defined as the stationary state of the meta-learning system using a simple majority rule, yet minority clusters that share the opposite classification outcome can be observed in the system. Therefore, the probability of selecting the proper class for given input data can be estimated even without prior knowledge of its affiliation. Fuzzy logic can easily be introduced into the system, even if the learning agents are built from simple binary-classification machine learning algorithms, by calculating the percentage of agreeing agents.
Title: Shrinkage Algorithms for MMSE Covariance Estimation
Abstract: We address covariance estimation in the sense of minimum mean-squared error (MMSE) for Gaussian samples. Specifically, we consider shrinkage methods which are suitable for high dimensional problems with a small number of samples (large p small n). First, we improve on the Ledoit-Wolf (LW) method by conditioning on a sufficient statistic. By the Rao-Blackwell theorem, this yields a new estimator called RBLW, whose mean-squared error dominates that of LW for Gaussian variables. Second, to further reduce the estimation error, we propose an iterative approach which approximates the clairvoyant shrinkage estimator. Convergence of this iterative method is established and a closed form expression for the limit is determined, which is referred to as the oracle approximating shrinkage (OAS) estimator. Both RBLW and OAS estimators have simple expressions and are easily implemented. Although the two methods are developed from different perspectives, their structure is identical up to specified constants. The RBLW estimator provably dominates the LW method. Numerical simulations demonstrate that the OAS approach can perform even better than RBLW, especially when n is much less than p. We also demonstrate the performance of these techniques in the context of adaptive beamforming.
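The estimators discussed above (LW, RBLW, OAS) share one structure: a convex combination of the sample covariance and a scaled identity. The sketch below fixes the shrinkage weight by hand rather than using any of their data-driven formulas, so it only illustrates the common form.

```python
import numpy as np

def shrink_covariance(X, rho):
    """Shrink the sample covariance toward a scaled identity:
    Sigma = (1 - rho) * S + rho * (tr(S)/p) * I.
    The LW/RBLW/OAS estimators all take this form; here rho is a
    hand-picked illustration, not their optimized coefficient."""
    n, p = X.shape
    Xc = X - X.mean(axis=0)
    S = Xc.T @ Xc / n              # sample covariance (MLE normalization)
    mu = np.trace(S) / p           # average eigenvalue of S
    return (1.0 - rho) * S + rho * mu * np.eye(p)

rng = np.random.default_rng(4)
n, p = 20, 50                      # "large p, small n" regime
X = rng.standard_normal((n, p))
S = np.cov(X, rowvar=False, bias=True)
Sigma = shrink_covariance(X, rho=0.5)
```

With n < p the sample covariance is singular, while any positive shrinkage weight makes the estimate full-rank and well-conditioned, which is why these methods matter for applications like adaptive beamforming.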
Title: Regeneration and Fixed-Width Analysis of Markov Chain Monte Carlo Algorithms
Abstract: In the thesis we take the split chain approach to analyzing Markov chains and use it to establish fixed-width results for estimators obtained via Markov chain Monte Carlo procedures (MCMC). Theoretical results include necessary and sufficient conditions in terms of regeneration for central limit theorems for ergodic Markov chains, and a regenerative proof of a CLT version for uniformly ergodic Markov chains with $E_\pi f^2 < \infty$. To obtain asymptotic confidence intervals for MCMC estimators, strongly consistent estimators of the asymptotic variance are essential. We relax assumptions required to obtain such estimators. Moreover, under a drift condition, nonasymptotic fixed-width results for MCMC estimators are obtained for a general state space setting (not necessarily compact) and a not necessarily bounded target function $f$. The last chapter is devoted to the idea of adaptive Monte Carlo simulation and provides convergence results and a law of large numbers for adaptive procedures under a path-stability condition for transition kernels.
Title: A survey of cross-validation procedures for model selection
Abstract: Used to estimate the risk of an estimator or to perform model selection, cross-validation is a widespread strategy because of its simplicity and its apparent universality. Many results exist on the model selection performance of cross-validation procedures. This survey intends to relate these results to the most recent advances of model selection theory, with a particular emphasis on distinguishing empirical statements from rigorous theoretical results. As a conclusion, guidelines are provided for choosing the best cross-validation procedure according to the particular features of the problem at hand.
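A minimal V-fold (here 5-fold) cross-validation sketch for model selection; the polynomial-regression setup is an illustrative assumption. Folds are contiguous blocks for brevity, which also makes poorly extrapolating high-degree fits pay a visible penalty.

```python
import numpy as np

def kfold_cv_mse(x, y, degree, k=5):
    """Estimate the prediction MSE of a degree-`degree` polynomial fit
    by k-fold cross-validation: hold out each fold, train on the rest,
    and average the held-out squared errors."""
    idx = np.arange(len(x))
    errs = []
    for fold in np.array_split(idx, k):
        train = np.setdiff1d(idx, fold)
        coef = np.polyfit(x[train], y[train], degree)
        errs.append(np.mean((y[fold] - np.polyval(coef, x[fold])) ** 2))
    return float(np.mean(errs))

rng = np.random.default_rng(5)
x = np.linspace(-1.0, 1.0, 100)
y = 1.0 + 2.0 * x + rng.normal(0.0, 0.1, size=100)  # truly linear signal
scores = {d: kfold_cv_mse(x, y, d) for d in (1, 5, 15)}
```

Selecting the degree with the smallest CV score implements the risk-estimation use of cross-validation described above; the overfitted degree-15 model is penalized on held-out folds even though it fits the training folds better.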
Title: Spectral estimation of the L\'evy density in partially observed affine models
Abstract: The problem of estimating the L\'evy density of a partially observed multidimensional affine process from low-frequency and mixed-frequency data is considered. The estimation methodology is based on the log-affine representation of the conditional characteristic function of an affine process and local linear smoothing in time. We derive almost sure uniform rates of convergence for the estimated L\'evy density both in mixed-frequency and low-frequency setups and prove that these rates are optimal in the minimax sense. Finally, the performance of the estimation algorithms is illustrated in the case of the Bates stochastic volatility model.
Title: Nonasymptotic bounds on the estimation error for regenerative MCMC algorithms
Abstract: MCMC methods are used in Bayesian statistics not only to sample from posterior distributions but also to estimate expectations. Underlying functions are most often defined on a continuous state space and can be unbounded. We consider a regenerative setting and Monte Carlo estimators based on i.i.d. blocks of a Markov chain trajectory. The main result is an inequality for the mean square error. We also consider confidence bounds. We first derive the results in terms of the asymptotic variance and then bound the asymptotic variance for both uniformly ergodic and geometrically ergodic Markov chains.
Title: Ezhil: A Tamil Programming Language
Abstract: Ezhil is a Tamil-based interpreted procedural programming language. Its keywords and grammar are chosen so that native Tamil speakers can write programs naturally in the Ezhil system. Ezhil allows computer programs to be represented in a form close to Tamil, with logical constructs equivalent to the conditional, branch and loop statements of modern English-based programming languages. Ezhil is a compact programming language aimed at Tamil-speaking novice computer users. The grammar for Ezhil and a few example programs are reported here, from the initial proof-of-concept implementation using the Python programming language. To the best of our knowledge, Ezhil is the first freely available Tamil programming language.
Title: Automatic local Gabor Features extraction for face recognition
Abstract: We present in this paper a biometric system for face detection and recognition in color images. The face detection technique is based on skin color information and fuzzy classification. A new algorithm is proposed to detect face features (eyes, mouth and nose) automatically and extract their corresponding geometrical points. These fiducial points are described by sets of wavelet components which are used for recognition. To achieve face recognition, we use neural networks and study their performance for different inputs. We compare the two types of features used for recognition, geometric distances and Gabor coefficients, which can be used either independently or jointly. This comparison shows that Gabor coefficients are more powerful than geometric distances. Experimental results show that the high recognition rate makes our system an effective tool for automatic face detection and recognition.
Title: Restart Strategy Selection using Machine Learning Techniques
Abstract: Restart strategies are an important factor in the performance of conflict-driven Davis Putnam style SAT solvers. Selecting a good restart strategy for a problem instance can enhance the performance of a solver. Inspired by recent success applying machine learning techniques to predict the runtime of SAT solvers, we present a method which uses machine learning to boost solver performance through a smart selection of the restart strategy. Based on easy to compute features, we train both a satisfiability classifier and runtime models. We use these models to choose between restart strategies. We present experimental results comparing this technique with the most commonly used restart strategies. Our results demonstrate that machine learning is effective in improving solver performance.
Title: Online Search Cost Estimation for SAT Solvers
Abstract: We present two different methods for estimating the cost of solving SAT problems. The methods focus on the online behaviour of the backtracking solver, as well as the structure of the problem. Modern SAT solvers present several challenges for estimating search cost, including coping with nonchronological backtracking, learning and restarts. Our first method adapts an existing algorithm for estimating the size of a search tree to deal with these challenges. We then suggest a second method that uses a linear model trained on data gathered online at the start of search. We compare the effectiveness of these two methods using random and structured problems. We also demonstrate that predictions made in early restarts can be used to improve later predictions. We conclude by showing that the cost of solving a set of problems can be reduced by selecting a solver from a portfolio based on such cost estimations.