Abstract: Penalization procedures often suffer from their dependence on multiplying factors, whose optimal values are either unknown or hard to estimate from the data. We propose a completely data-driven calibration algorithm for this parameter in the least-squares regression framework, without assuming a particular shape for the penalty. Our algorithm relies on the concept of minimal penalty, recently introduced by Birgé and Massart (2007) in the context of penalized least squares for Gaussian homoscedastic regression. On the positive side, the minimal penalty can be evaluated from the data themselves, leading to a data-driven estimation of an optimal penalty which can be used in practice; on the negative side, their approach heavily relies on the homoscedastic Gaussian nature of their stochastic framework. The purpose of this paper is twofold: stating a more general heuristics for designing a data-driven penalty (the slope heuristics) and proving that it works for penalized least-squares regression with a random design, even for heteroscedastic non-Gaussian data. For technical reasons, some exact mathematical results will be proved only for regressogram bin-width selection. This is at least a first step towards further results, since the approach and the method that we use are indeed general.
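A minimal sketch of how the slope heuristics can be implemented in practice, assuming a penalty of the form pen(m) = C · D_m / n (model dimension over sample size) and using the dimension-jump method to locate the minimal constant; all names and the grid of constants are illustrative, not the paper's exact procedure:

```python
import numpy as np

def slope_heuristics(empirical_risk, dims, n, C_grid):
    """Dimension-jump calibration: for each constant C, select the model
    minimizing empirical_risk[m] + C * dims[m] / n; the minimal penalty
    corresponds to the largest jump in selected dimension as C grows,
    and the slope heuristics takes twice that minimal constant."""
    C_grid = np.sort(np.asarray(C_grid))
    selected = np.array([dims[np.argmin(empirical_risk + C * dims / n)]
                         for C in C_grid])
    jumps = selected[:-1] - selected[1:]   # selected dimension drops as C grows
    C_min = C_grid[np.argmax(jumps) + 1]   # constant right after the largest jump
    return 2.0 * C_min                     # data-driven optimal penalty constant
```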
|
Title: Least angle and $\ell_1$ penalized regression: A review
|
Abstract: Least Angle Regression is a promising technique for variable selection applications, offering a nice alternative to stepwise regression. It provides an explanation for the similar behavior of LASSO ($\ell_1$-penalized regression) and forward stagewise regression, and provides a fast implementation of both. The idea has caught on rapidly, and sparked a great deal of research interest. In this paper, we give an overview of Least Angle Regression and the current state of related research.
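As an illustration, scikit-learn exposes the Least Angle Regression algorithm and its LASSO variant through `lars_path`; the following computes the full piecewise-linear coefficient path on synthetic data:

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.linear_model import lars_path

X, y = make_regression(n_samples=100, n_features=10, noise=5.0, random_state=0)

# method='lar' gives plain Least Angle Regression; method='lasso' uses the
# same algorithm to compute the LASSO path, as discussed in the review.
alphas, active, coefs = lars_path(X, y, method='lasso')
print(coefs.shape)  # (n_features, n_knots): one coefficient vector per knot
```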
|
Title: New Estimation Procedures for PLS Path Modelling
|
Abstract: Given R groups of numerical variables X1, ..., XR, we assume that each group is the result of one underlying latent variable, and that all latent variables are bound together through a linear equation system. Moreover, we assume that some explanatory latent variables may interact pairwise in one or more equations. We basically consider the PLS Path Modelling algorithm to estimate both the latent variables and the model's coefficients. New "external" estimation schemes are proposed that draw latent variables towards strong group structures in a more flexible way. New "internal" estimation schemes are proposed to enable PLSPM to make good use of variable group complementarity and to deal with interactions. Application examples are given.
|
Title: Calculations of Sobol indices for the Gaussian process metamodel
|
Abstract: Global sensitivity analysis of complex numerical models can be performed by calculating variance-based importance measures of the input variables, such as the Sobol indices. However, these techniques require a large number of model evaluations, which is often unaffordable for time-expensive computer codes. A well-known and widely used solution consists of replacing the computer code by a metamodel, which predicts the model responses with a negligible computation time and renders the estimation of Sobol indices straightforward. In this paper, we discuss the Gaussian process model, which gives analytical expressions of the Sobol indices. Two approaches are studied to compute the Sobol indices: the first based on the predictor of the Gaussian process model and the second based on the global stochastic process model. Comparisons between the two estimates, made on analytical examples, show the superiority of the second approach in terms of convergence and robustness. Moreover, the second approach makes it possible to account for the modeling error of the Gaussian process model by directly giving confidence intervals on the Sobol indices. These techniques are finally applied to a real case of hydrogeological modeling.
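For concreteness, here is a pick-freeze Monte Carlo estimator of first-order Sobol indices using a cheap predictor in place of the expensive code; the Ishigami test function stands in for a fitted Gaussian process predictor. This sketches the generic surrogate-based route, not the paper's analytical GP expressions:

```python
import numpy as np

def sobol_first_order(predict, d, n=10_000, rng=None):
    """Pick-freeze estimate of first-order Sobol indices S_i, replacing
    the expensive computer code by a cheap surrogate `predict`."""
    gen = np.random.default_rng(rng)
    A = gen.random((n, d))
    B = gen.random((n, d))
    yA = predict(A)
    S = np.empty(d)
    for i in range(d):
        ABi = B.copy()
        ABi[:, i] = A[:, i]                  # freeze input i from A, redraw the rest
        S[i] = np.cov(yA, predict(ABi))[0, 1] / yA.var()
    return S

def ishigami(X):
    """Standard sensitivity-analysis test function on [0, 1]^3."""
    X = 2 * np.pi * X - np.pi
    return np.sin(X[:, 0]) + 7 * np.sin(X[:, 1])**2 + 0.1 * X[:, 2]**4 * np.sin(X[:, 0])

print(sobol_first_order(ishigami, d=3, rng=0))  # roughly (0.31, 0.44, 0.0)
```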
|
Title: Detecting the overlapping and hierarchical community structure of complex networks
|
Abstract: Many networks in nature, society and technology are characterized by a mesoscopic level of organization, with groups of nodes forming tightly connected units, called communities or modules, that are only weakly linked to each other. Uncovering this community structure is one of the most important problems in the field of complex networks. Networks often show a hierarchical organization, with communities embedded within other communities; moreover, nodes can be shared between different communities. Here we present the first algorithm that finds both overlapping communities and the hierarchical structure. The method is based on the local optimization of a fitness function. Community structure is revealed by peaks in the fitness histogram. The resolution can be tuned by a parameter that enables the investigation of different hierarchical levels of organization. Tests on real and artificial networks give excellent results.
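A minimal sketch of local fitness optimization in the spirit of this method, assuming the standard community fitness k_in / (k_in + k_out)^alpha with resolution parameter alpha, and omitting the node-removal and hierarchy-extraction steps of the full algorithm:

```python
import networkx as nx

def fitness(G, community, alpha=1.0):
    """Fitness of a node set: k_in / (k_in + k_out)**alpha, where k_in counts
    edge endpoints inside the community and k_out edges leaving it."""
    k_in = 2 * G.subgraph(community).number_of_edges()
    k_out = sum(1 for u in community for v in G[u] if v not in community)
    return k_in / (k_in + k_out) ** alpha

def grow_community(G, seed, alpha=1.0):
    """Greedily add the neighbor that most improves the community fitness."""
    comm = {seed}
    while True:
        frontier = {v for u in comm for v in G[u]} - comm
        best, best_f = None, fitness(G, comm, alpha)
        for v in frontier:
            f = fitness(G, comm | {v}, alpha)
            if f > best_f:
                best, best_f = v, f
        if best is None:
            return comm          # local fitness maximum reached
        comm.add(best)
```

Larger alpha yields smaller communities, which is how the resolution parameter exposes different hierarchical levels.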
|
Title: Learning Balanced Mixtures of Discrete Distributions with Small Sample
|
Abstract: We study the problem of partitioning a small sample of $n$ individuals from a mixture of $k$ product distributions over a Boolean cube $\{0, 1\}^K$ according to their distributions. Each distribution is described by a vector of allele frequencies in $\mathbb{R}^K$. Given two distributions, we use $\gamma$ to denote the average $\ell_2^2$ distance in frequencies across $K$ dimensions, which measures the statistical divergence between them. We study the case assuming that bits are independently distributed across $K$ dimensions. This work demonstrates that, for a balanced input instance with $k = 2$, a certain graph-based optimization function returns the correct partition with high probability, where a weighted graph $G$ is formed over the $n$ individuals, whose pairwise Hamming distances between their corresponding bit vectors define the edge weights, so long as $K = \Omega(\ln n/\gamma)$ and $Kn = \tilde{\Omega}(\ln n/\gamma^2)$. The function computes a maximum-weight balanced cut of $G$, where the weight of a cut is the sum of the weights across all edges in the cut. This result demonstrates a nice property of the high-dimensional feature space: one can trade off the number of features required against the sample size needed to accomplish certain tasks such as clustering.
|
Title: Bayesian Nonlinear Principal Component Analysis Using Random Fields
|
Abstract: We propose a novel model for nonlinear dimension reduction motivated by the probabilistic formulation of principal component analysis. Nonlinearity is achieved by specifying different transformation matrices at different locations of the latent space and smoothing the transformation using a Markov random field type prior. The computation is made feasible by the recent advances in sampling from von Mises-Fisher distributions.
|
Title: Network as a computer: ranking paths to find flows
|
Abstract: We explore a simple mathematical model of network computation, based on Markov chains. Similar models apply to a broad range of computational phenomena, arising in networks of computers, as well as in genetic and neural networks, in social networks, and so on. The main problem of interacting with such spontaneously evolving computational systems is that the data are not uniformly structured. An interesting approach is to try to extract the semantic content of the data from their distribution among the nodes. A concept is then identified by finding the community of nodes that share it. The task of data structuring is thus reduced to the task of finding the network communities, as groups of nodes that together perform some non-local data processing. Towards this goal, we extend the ranking methods from nodes to paths. This allows us to extract some information about the likely flow biases from the available static information about the network.
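As a reference point, node-level ranking for such a Markov chain model reduces to computing its stationary distribution; a minimal power-iteration sketch (the paper's contribution is extending this kind of ranking from nodes to paths):

```python
import numpy as np

def stationary_ranking(P, tol=1e-12, max_iter=10_000):
    """Rank the nodes of a network by the stationary distribution of a
    Markov chain with row-stochastic transition matrix P (power iteration)."""
    n = P.shape[0]
    pi = np.full(n, 1.0 / n)
    for _ in range(max_iter):
        nxt = pi @ P
        if np.abs(nxt - pi).sum() < tol:
            break
        pi = nxt
    return np.argsort(-pi), pi   # nodes ordered from most to least visited

# A path can then be scored from the same static information, e.g. by the
# product of the transition probabilities along its edges.
```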
|
Title: A Bayesian reassessment of nearest-neighbour classification
|
Abstract: The k-nearest-neighbour procedure is a well-known deterministic method used in supervised classification. This paper proposes a reassessment of this approach as a statistical technique derived from a proper probabilistic model; in particular, we modify the assessment made in a previous analysis of this method undertaken by Holmes and Adams (2002, 2003), and evaluated by Manocha and Girolami (2007), where the underlying probabilistic model is not completely well-defined. Once a clear probabilistic basis for the k-nearest-neighbour procedure is established, we derive computational tools for conducting Bayesian inference on the parameters of the corresponding model. In particular, we assess the difficulties inherent in pseudo-likelihood and path sampling approximations of an intractable normalising constant, and propose a perfect sampling strategy to implement a correct MCMC sampler associated with our model. If perfect sampling is not available, we suggest using a Gibbs sampling approximation. Illustrations of the performance of the corresponding Bayesian classifier are provided for several benchmark datasets, demonstrating in particular the limitations of the pseudo-likelihood approximation in this set-up.
|
Title: Agents as Scheme Interpreters: Dynamic Specification through Communication
|
Abstract: We proposed in previous papers an extension and an implementation of the STROBE model, which regards Agents as Scheme interpreters. These Agents are able to interpret messages in a dedicated environment including an interpreter that learns from the current conversation, thus representing the Agent's evolving meta-level knowledge. When the Agent's interpreter is nondeterministic, the dialogues may consist of successive refinements of specifications in the form of constraint sets. The paper presents a worked-out example of dynamic service generation - such as is necessary on Grids - exploiting STROBE Agents equipped with a nondeterministic interpreter. It shows how they enable the dynamic specification of a problem, and then illustrates how these principles could be effective for other applications. Details of the implementation are not provided here, but are available.
|
Title: Extreme Learning Machine for land cover classification
|
Abstract: This paper explores the potential of the extreme learning machine based supervised classification algorithm for land cover classification. In comparison to a backpropagation neural network, which requires the setting of several user-defined parameters and may produce local minima, an extreme learning machine requires the setting of one parameter and produces a unique solution. An ETM+ multispectral data set (England) was used to judge the suitability of the extreme learning machine for remote sensing classifications. A backpropagation neural network was used to compare performance in terms of classification accuracy and computational cost. Results suggest that the extreme learning machine performs as well as the backpropagation neural network in terms of classification accuracy with this data set, while its computational cost is very small in comparison.
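A minimal numpy sketch of the basic extreme learning machine, illustrating the single least-squares solve behind the "unique solution" claim; the hidden-layer size is the one user-set parameter (the paper's exact experimental setup is not reproduced here):

```python
import numpy as np

def elm_train(X, y, n_hidden=100, rng=None):
    """Extreme learning machine: fix a random hidden layer, then solve for
    the output weights in closed form via the pseudo-inverse."""
    gen = np.random.default_rng(rng)
    W = gen.normal(size=(X.shape[1], n_hidden))
    b = gen.normal(size=n_hidden)
    H = np.tanh(X @ W + b)              # random nonlinear features
    beta = np.linalg.pinv(H) @ y        # unique least-squares solution
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta    # for classification: one-hot y, argmax here
```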
|
Title: A New Approach to Collaborative Filtering: Operator Estimation with Spectral Regularization
|
Abstract: We present a general approach for collaborative filtering (CF) using spectral regularization to learn linear operators from "users" to the "objects" they rate. Recent low-rank type matrix completion approaches to CF are shown to be special cases. Unlike existing regularization-based CF methods, however, our approach can also incorporate additional information such as attributes of the users or the objects. We then provide novel representer theorems that we use to develop new estimation methods. We provide learning algorithms based on low-rank decompositions and test them on a standard CF dataset. The experiments indicate the advantages of generalizing the existing regularization-based CF methods to incorporate related information about users and objects. Finally, we show that certain multi-task learning methods can also be seen as special cases of our proposed approach.
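A sketch of the low-rank special case, fitted by alternating least squares on the observed entries; this is a generic matrix-completion baseline rather than the paper's spectral-regularization estimator, and incorporating user/object attributes would replace the plain ridge regularizer:

```python
import numpy as np

def factorize(R, mask, rank=10, lam=0.1, n_iter=20, rng=0):
    """Alternating least squares for a low-rank fit of the observed entries
    of a user x object rating matrix R (mask marks observed cells)."""
    gen = np.random.default_rng(rng)
    n_u, n_o = R.shape
    U = 0.1 * gen.normal(size=(n_u, rank))
    V = 0.1 * gen.normal(size=(n_o, rank))
    I = lam * np.eye(rank)
    for _ in range(n_iter):
        for u in range(n_u):                      # fix V, solve for each user row
            idx = mask[u] > 0
            U[u] = np.linalg.solve(V[idx].T @ V[idx] + I, V[idx].T @ R[u, idx])
        for o in range(n_o):                      # fix U, solve for each object row
            idx = mask[:, o] > 0
            V[o] = np.linalg.solve(U[idx].T @ U[idx] + I, U[idx].T @ R[idx, o])
    return U, V   # predicted ratings: U @ V.T
```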
|
Title: On the $\ell_1-\ell_q$ Regularized Regression
|
Abstract: In this paper we consider the problem of grouped variable selection in high-dimensional regression using $\ell_1-\ell_q$ regularization ($1\leq q \leq \infty$), which can be viewed as a natural generalization of the $\ell_1-\ell_2$ regularization (the group Lasso). The key condition is that the dimensionality $p_n$ can increase much faster than the sample size $n$, i.e. $p_n \gg n$ (in our case $p_n$ is the number of groups), but the number of relevant groups is small. The main conclusion is that many good properties of $\ell_1$-regularization (Lasso) naturally carry over to the $\ell_1-\ell_q$ cases ($1 \leq q \leq \infty$), even if the number of variables within each group also increases with the sample size. With fixed design, we show that the whole family of estimators is both estimation consistent and variable selection consistent under different conditions. We also show a persistence result with random design under a much weaker condition. These results provide a unified treatment for the whole family of estimators ranging from $q=1$ (Lasso) to $q=\infty$ (iCAP), with $q=2$ (group Lasso) as a special case. When no group structure is available, all the analysis reduces to the existing results for the Lasso estimator ($q=1$).
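For the $q=2$ member of this family (the group Lasso), estimation reduces to proximal gradient steps with block soft-thresholding; a minimal sketch, with group index sets, step size, and iteration count as illustrative choices:

```python
import numpy as np

def group_soft_threshold(beta, groups, lam):
    """Proximal operator of the l1-l2 (group Lasso) penalty: each group's
    coefficient block is shrunk toward zero as a whole."""
    out = beta.copy()
    for g in groups:                          # g: array of indices for one group
        norm = np.linalg.norm(beta[g])
        out[g] = 0.0 if norm <= lam else (1 - lam / norm) * beta[g]
    return out

def group_lasso(X, y, groups, lam, n_iter=500):
    """Proximal gradient (ISTA) for the q=2 case; q=inf (iCAP) would swap
    in the prox of the l1-linf penalty instead."""
    n, p = X.shape
    step = n / np.linalg.norm(X, 2) ** 2      # 1 / Lipschitz constant of the gradient
    beta = np.zeros(p)
    for _ in range(n_iter):
        grad = X.T @ (X @ beta - y) / n
        beta = group_soft_threshold(beta - step * grad, groups, step * lam)
    return beta
```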
|
Title: Stochastic Algorithm For Parameter Estimation For Dense Deformable Template Mixture Model
|
Abstract: Estimating probabilistic deformable template models is a new approach in the fields of computer vision and probabilistic atlases in computational anatomy. A first coherent statistical framework modelling the variability as a hidden random variable was given by Allassonnière, Amit and Trouvé in [1] for simple and mixture of deformable template models. A consistent stochastic algorithm was introduced in [2] to address the problem encountered in [1] with the convergence of the estimation algorithm for the one-component model in the presence of noise. We propose here to continue in this direction, using an "SAEM-like" algorithm to approximate the MAP estimator in the general Bayesian setting of the mixture of deformable template models. We also prove the convergence of this algorithm towards a critical point of the penalised likelihood of the observations, and illustrate this with handwritten digit images.
|
Title: M-decomposability, elliptical unimodal densities, and applications to clustering and kernel density estimation
|
Abstract: Chia and Nakano (2009) introduced the concept of M-decomposability of probability densities in one dimension. In this paper, we generalize M-decomposability to any dimension. We prove that all elliptical unimodal densities are M-undecomposable. We also derive an inequality showing that it is better to represent an M-decomposable density via a mixture of unimodal densities. Finally, we demonstrate the application of M-decomposability to clustering and kernel density estimation, using real and simulated data. Our results show that M-decomposability can be used as a non-parametric criterion to locate modes in probability densities.
|
Title: Combining Expert Advice Efficiently
|
Abstract: We show how models for prediction with expert advice can be defined concisely and clearly using hidden Markov models (HMMs); standard HMM algorithms can then be used to efficiently calculate, among other things, how the expert predictions should be weighted according to the model. We cast many existing models as HMMs and recover the best known running times in each case. We also describe two new models: the switch distribution, which was recently developed to improve Bayesian/Minimum Description Length model selection, and a new generalisation of the fixed share algorithm based on run-length coding. We give loss bounds for all models and shed new light on their relationships.
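One of the schemes the paper recovers in HMM form is the fixed-share algorithm; a minimal sketch of its weight update, an exponential-weights step followed by a uniform share step that allows switching between experts:

```python
import numpy as np

def fixed_share(expert_losses, eta=1.0, alpha=0.05):
    """Fixed-share weighting of K experts over T rounds.
    expert_losses: array of shape (T, K); returns the mixture weights
    used at each round."""
    T, K = expert_losses.shape
    w = np.full(K, 1.0 / K)
    weights = np.empty((T, K))
    for t in range(T):
        weights[t] = w
        w = w * np.exp(-eta * expert_losses[t])  # exponential-weights loss update
        w /= w.sum()
        w = (1 - alpha) * w + alpha / K          # share step: mass for switching
    return weights
```

In the paper's framing, the share step corresponds to the transition probabilities of a hidden Markov model over the identity of the best expert.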
|
Title: FINE: Fisher Information Non-parametric Embedding
|
Abstract: We consider the problems of clustering, classification, and visualization of high-dimensional data when no straightforward Euclidean representation exists. Typically, these tasks are performed by first reducing the high-dimensional data to some lower dimensional Euclidean space, as many manifold learning methods have been developed for this task. In many practical problems however, the assumption of a Euclidean manifold cannot be justified. In these cases, a more appropriate assumption would be that the data lie on a statistical manifold, or a manifold of probability density functions (PDFs). In this paper we propose using the properties of information geometry to define similarities between data sets using the Fisher information metric. We show that this metric can be approximated using entirely non-parametric methods, as the parameterization of the manifold is generally unknown. Furthermore, by using multi-dimensional scaling methods, we are able to embed the corresponding PDFs into a low-dimensional Euclidean space. This allows not only classification of the data, but also visualization of the manifold. As a whole, we refer to our framework as Fisher Information Non-parametric Embedding (FINE), and illustrate its uses on a variety of practical problems, including bio-medical applications and document classification.
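A minimal sketch of this kind of pipeline under simplifying assumptions: one non-parametric density estimate (histogram) per data set, the Hellinger distance as one standard approximation of the Fisher information distance, and classical multi-dimensional scaling for the Euclidean embedding:

```python
import numpy as np

def hellinger(p, q):
    """Hellinger distance between two normalized histograms; locally it
    approximates the Fisher information distance between the PDFs."""
    return np.sqrt(0.5 * np.sum((np.sqrt(p) - np.sqrt(q)) ** 2))

def classical_mds(D, dim=2):
    """Embed a pairwise-distance matrix into a low-dimensional Euclidean space."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n
    B = -0.5 * J @ (D ** 2) @ J                     # double-centered Gram matrix
    vals, vecs = np.linalg.eigh(B)
    idx = np.argsort(vals)[::-1][:dim]              # top eigenpairs
    return vecs[:, idx] * np.sqrt(np.maximum(vals[idx], 0))

# Pipeline: estimate a histogram per data set, fill D with pairwise Hellinger
# distances, then classify or visualize in the MDS embedding.
```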
|
Title: Why stratification may hurt, & how much
|
Abstract: There are circumstances under which stratified sampling is worse than simple random sampling, even if the allocation of the sample sizes is optimal. This phenomenon was discovered more than sixty years ago, but is not as widely known as one might expect. We provide lower and upper bounds on how bad it can get, together with an explanation.
|
Title: New Implementation Framework for Saturation-Based Reasoning
|
Abstract: The saturation-based reasoning methods are among the most theoretically developed ones and are used by most of the state-of-the-art first-order logic reasoners. In the last decade there was a sharp increase in the performance of such systems, which I attribute to the use of advanced calculi and intensified research in implementation techniques. However, nowadays we are witnessing a slowdown in performance progress, which may be considered a sign that the saturation-based technology is reaching its inherent limits. The position I am trying to put forward in this paper is that such scepticism is premature and a sharp improvement in performance may potentially be reached by adopting new architectural principles for saturation. The top-level algorithms and corresponding designs used in the state-of-the-art saturation-based theorem provers have (at least) two inherent drawbacks: the insufficient flexibility of the inference selection mechanisms used and the lack of means for intelligent prioritising of search directions. In this position paper I analyse these drawbacks and present two ideas on how they could be overcome. In particular, I propose a flexible low-cost high-precision mechanism for inference selection, intended to overcome problems associated with the currently used instances of clause selection-based procedures. I also outline a method for intelligent prioritising of search directions, based on probing the search space by exploring generalised search directions. I discuss some technical issues related to implementation of the proposed architectural principles and outline possible solutions.
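For reference, the top-level algorithm under discussion is the given-clause saturation loop; a skeletal sketch with priority-queue clause selection (the component whose inflexibility the paper criticizes), with inference computation abstracted behind a caller-supplied function:

```python
import heapq
from itertools import count

def saturate(axioms, infer, max_steps=100_000):
    """Given-clause loop skeleton. Clauses are frozensets of literals;
    `infer(given, active)` must return the conclusions of all inferences
    between the given clause and the active set. Clause selection here is
    by clause weight (number of literals), the simplest common heuristic."""
    tie = count()                                 # tie-breaker for the heap
    passive = [(len(c), next(tie), c) for c in axioms]
    heapq.heapify(passive)
    active = []
    for _ in range(max_steps):
        if not passive:
            return "saturated", active            # no contradiction derivable
        _, _, given = heapq.heappop(passive)
        if not given:                             # empty clause: proof found
            return "unsatisfiable", active
        active.append(given)
        for c in infer(given, active):
            heapq.heappush(passive, (len(c), next(tie), c))
    return "timeout", active
```

Real provers add redundancy elimination (subsumption, demodulation) around this loop; it is omitted here for brevity.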
|
Title: Support Vector classifiers for Land Cover Classification
|
Abstract: Support vector machines represent a promising development in machine learning research that is not widely used within the remote sensing community. This paper reports results on multispectral (Landsat-7 ETM+) and hyperspectral (DAIS) data in which multi-class SVMs are compared with maximum likelihood and artificial neural network methods in terms of classification accuracy. Our results show that the SVM achieves a higher level of classification accuracy than either the maximum likelihood or the neural classifier, and that the support vector machine can be used with small training datasets and high-dimensional data.
|
Title: A New Approach of Point Estimation from Truncated or Grouped and Censored Data
|
Abstract: We propose a new approach for estimating the parameters of a probability distribution. It consists of combining two new methods of estimation. The first is based on the definition of a new distance measuring the difference between the variations of two distributions on a finite number of points from their support, and on using this measure for estimation purposes by the method of minimum distance. For the second method, given an empirical discrete distribution, we build an auxiliary discrete theoretical distribution having the same support as the first and depending on the same parameters as the parent distribution of the data from which the empirical distribution emanated. We then estimate the parameters from the empirical distribution by the usual statistical methods. In practice, we propose to compute both estimations, the second based on the maximum likelihood principle with its known theoretical properties, and the first serving as a control of the effectiveness of the obtained estimation, for which we prove convergence in probability, so that we also have a criterion on the quality of the information contained in the observations. We apply the approach to truncated or grouped and censored data situations to give a flavour of its effectiveness. We also give some interesting perspectives of the approach, including model selection from truncated data, estimation of the initial trial value in the celebrated EM algorithm in the case of truncation and merged normal populations, a test of goodness of fit based on the new distance, and the quality of estimations and data.
|
Title: A Radar-Shaped Statistic for Testing and Visualizing Uniformity Properties in Computer Experiments
|
Abstract: In the study of computer codes, filling the space as uniformly as possible is important to describe the complexity of the investigated phenomenon. However, this property is not preserved by reducing the dimension. Some designs of numerical experiments, such as Latin hypercubes or orthogonal arrays, are conceived with this in mind, but they only consider projections onto the axes or the coordinate planes. In this article we introduce a statistic which allows the distribution of points to be studied according to all 1-dimensional projections. By angularly scanning the domain, we obtain a radar-type representation, allowing the uniformity defects of a design to be identified with respect to its projections onto straight lines. The advantages of this new tool are demonstrated on usual examples of space-filling designs (SFD), and a global statistic independent of the angle of rotation is studied.
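A minimal sketch of the angular scan under one simplifying assumption: a two-sample Kolmogorov-Smirnov statistic against the projection of a large uniform reference sample serves as the discrepancy for each 1-dimensional projection (the paper's exact statistic differs, but the radar-scan idea is the same):

```python
import numpy as np
from scipy.stats import ks_2samp

def radar_scan(design, n_angles=180, n_ref=100_000, rng=0):
    """For each angle, project a 2-D design onto that direction and compare
    the projection with that of a large uniform reference sample; plotting
    the statistic against the angle gives the radar-type view."""
    ref = np.random.default_rng(rng).random((n_ref, 2))
    stats = []
    for theta in np.linspace(0, np.pi, n_angles, endpoint=False):
        u = np.array([np.cos(theta), np.sin(theta)])
        stats.append(ks_2samp(design @ u, ref @ u).statistic)
    return np.array(stats)

# A random Latin hypercube is uniform on the axis projections; the scan can
# reveal its defects at oblique angles.
gen = np.random.default_rng(1)
n = 50
lhs = np.column_stack([(gen.permutation(n) + gen.random(n)) / n for _ in range(2)])
print(radar_scan(lhs).max())
```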
|
Title: Textual Fingerprinting with Texts from Parkin, Bassewitz, and Leander
|
Abstract: Current research in author profiling to discover a legal author's fingerprint no longer relies on examinations based on statistical parameters alone, but increasingly includes dynamic methods that can learn and react adaptively to the specific behavior of an author. But the question of how to appropriately represent a text is still one of the fundamental tasks, and the problem of which attributes should be used to fingerprint the author's style is still not exactly defined. In this work, we focus on the linguistic selection of attributes to fingerprint the style of the authors Parkin, Bassewitz and Leander. We use texts of the genre Fairy Tale, as it has a clear style and shorter texts with a straightforward story-line and simple language.
|
Title: Compressed Counting
|
Abstract: Counting is among the most fundamental operations in computing. For example, counting the pth frequency moment has been a very active area of research, in theoretical computer science, databases, and data mining. When p=1, the task (i.e., counting the sum) can be accomplished using a simple counter. Compressed Counting (CC) is proposed for efficiently computing the pth frequency moment of a data stream signal $A_t$, where $0 < p \le 2$. CC is applicable if the streaming data follow the Turnstile model, with the restriction that at the time $t$ of the evaluation, $A_t[i] \ge 0$, which includes the strict Turnstile model as a special case. For natural data streams encountered in practice, this restriction is minor. The underlying technique of CC is what we call skewed stable random projections, which captures the intuition that, when $p=1$, a simple counter suffices, and when $p = 1 \pm \Delta$ with small $\Delta$, the sample complexity of a counter system should be low (continuously as a function of $\Delta$). We show that at small $\Delta$ the sample complexity (number of projections) is $k = O(1/\epsilon)$ instead of $O(1/\epsilon^2)$. Compressed Counting can serve as a basic building block for other tasks in statistics and computing, for example, estimating entropies of data streams and parameter estimation using the method of moments or maximum likelihood. Finally, another contribution is an algorithm for approximating the logarithmic norm, $\sum_{i=1}^{D} \log A_t[i]$, and the logarithmic distance. The logarithmic distance is useful in machine learning practice with heavy-tailed data.
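For background, a sketch of the symmetric stable random projection baseline (Indyk-style median estimator) that skewed projections refine; the `levy_stable` sampler and the simulation-based calibration constant are implementation choices for this sketch, not the paper's scheme:

```python
import numpy as np
from scipy.stats import levy_stable

def frequency_moment_estimate(A, p=1.2, k=100, rng=0):
    """Estimate the p-th frequency moment sum_i A[i]**p with symmetric
    p-stable random projections: each projection y_j is p-stable with
    scale ||A||_p, so a calibrated median of |y| recovers the moment."""
    gen = np.random.default_rng(rng)
    S = levy_stable.rvs(p, 0.0, size=(k, len(A)), random_state=gen)
    y = S @ A                                     # k stable projections of the signal
    calib = np.median(np.abs(
        levy_stable.rvs(p, 0.0, size=100_000, random_state=gen)))
    return (np.median(np.abs(y)) / calib) ** p

A = np.abs(np.random.default_rng(1).normal(size=1000))
print(frequency_moment_estimate(A, p=1.2), np.sum(A ** 1.2))
```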
|
Title: Higher-Order Properties of Analytic Wavelets
|
Abstract: The influence of higher-order wavelet properties on the analytic wavelet transform behavior is investigated, and wavelet functions offering advantageous performance are identified. This is accomplished through detailed investigation of the generalized Morse wavelets, a two-parameter family of exactly analytic continuous wavelets. The degree of time/frequency localization, the existence of a mapping between scale and frequency, and the bias involved in estimating properties of modulated oscillatory signals, are proposed as important considerations. Wavelet behavior is found to be strongly impacted by the degree of asymmetry of the wavelet in both the frequency and the time domain, as quantified by the third central moments. A particular subset of the generalized Morse wavelets, recognized as deriving from an inhomogeneous Airy function, emerge as having particularly desirable properties. These "Airy wavelets" substantially outperform the only approximately analytic Morlet wavelets for high time localization. Special cases of the generalized Morse wavelets are examined, revealing a broad range of behaviors which can be matched to the characteristics of a signal.
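A minimal frequency-domain sketch of the generalized Morse wavelet family, normalized here to unit peak value; gamma = 3 gives the Airy branch highlighted in the abstract, and the product beta*gamma controls the time/frequency localization:

```python
import numpy as np

def morse_wavelet(omega, beta, gamma):
    """Generalized Morse wavelet in the frequency domain: proportional to
    omega**beta * exp(-omega**gamma) for omega > 0 and exactly zero for
    omega <= 0 (which makes the wavelet analytic)."""
    peak = (beta / gamma) ** (1.0 / gamma)        # peak (carrier) frequency
    log_psi = (beta * np.log(np.maximum(omega, 1e-300)) - omega ** gamma
               - (beta * np.log(peak) - peak ** gamma))
    return np.where(omega > 0, np.exp(log_psi), 0.0)

omega = np.linspace(0, 10, 1000)
psi_airy = morse_wavelet(omega, beta=8.0, gamma=3.0)   # gamma = 3: Airy family
```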
|
Title: Multiclass Approaches for Support Vector Machine Based Land Cover Classification
|
Abstract: SVMs were initially developed to perform binary classification, though applications of binary classification are very limited. Most practical applications involve multiclass classification, especially in remote sensing land cover classification. A number of methods have been proposed to generate multiclass SVMs from binary SVMs, and this remains a continuing research topic. This paper compares the performance of six multiclass approaches for solving classification problems with remote sensing data in terms of classification accuracy and computational cost. The one vs. one, one vs. rest, Directed Acyclic Graph (DAG), and Error Corrected Output Coding (ECOC) based multiclass approaches create many binary classifiers and combine their results to determine the class label of a test pixel. Another category of multiclass approach modifies the binary-class objective function and allows simultaneous computation of multiclass classification by solving a single optimisation problem. Results from this study demonstrate the advantage of the one vs. one multiclass approach in terms of accuracy and computational cost over the other multiclass approaches.
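For illustration, scikit-learn provides generic one vs. one and one vs. rest wrappers around a binary SVM; a small comparison on a stand-in dataset (digits here, in place of remote sensing imagery):

```python
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.multiclass import OneVsOneClassifier, OneVsRestClassifier
from sklearn.svm import SVC

X, y = load_digits(return_X_y=True)   # stand-in for multispectral pixels
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# One vs. one trains k(k-1)/2 binary SVMs; one vs. rest trains k of them.
for name, clf in [("one vs. one", OneVsOneClassifier(SVC(kernel="rbf"))),
                  ("one vs. rest", OneVsRestClassifier(SVC(kernel="rbf")))]:
    clf.fit(X_tr, y_tr)
    print(name, clf.score(X_te, y_te))
```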
|
Title: Controlled stratification for quantile estimation
|
Abstract: In this paper we propose and discuss variance reduction techniques for the estimation of quantiles of the output of a complex model with random input parameters. These techniques are based on the use of a reduced model, such as a metamodel or a response surface. The reduced model can be used as a control variate; or a rejection method can be implemented to sample the realizations of the input parameters in prescribed relevant strata; or the reduced model can be used to determine a good biased distribution of the input parameters for the implementation of an importance sampling strategy. The different strategies are analyzed and the asymptotic variances are computed, which shows the benefit of an adaptive controlled stratification method. This method is finally applied to a real example (computation of the peak cladding temperature during a large-break loss of coolant accident in a nuclear reactor).
|
Title: Sign Language Tutoring Tool
|
Abstract: In this project, we have developed a sign language tutor that lets users learn isolated signs by watching recorded videos and by trying the same signs. The system records the user's video and analyses it. If the sign is recognized, both verbal and animated feedback is given to the user. The system is able to recognize complex signs that involve hand gestures as well as head movements and facial expressions. Our performance tests yield a 99% recognition rate on signs involving only manual gestures and an 85% recognition rate on signs that involve both manual and non-manual components, such as head movement and facial expressions.
|
Title: Anisotropic selection in cellular genetic algorithms
|
Abstract: In this paper we introduce a new selection scheme in cellular genetic algorithms (cGAs). Anisotropic Selection (AS) promotes diversity and allows accurate control of the selective pressure. First we compare this new scheme with the classical solution of rectangular grid shapes with respect to selective pressure: the two techniques can achieve the same takeover time, although the spreading of the best individual differs. We then give experimental results showing to what extent AS promotes the emergence of niches that support low coupling and high cohesion. Finally, using a cGA with anisotropic selection on a Quadratic Assignment Problem, we show the existence of an optimal anisotropy value for which the best average performance is observed. Further work will focus on the selective-pressure self-adjustment ability provided by this new selection scheme.
|
Title: A Localization Approach to Improve Iterative Proportional Scaling in Gaussian Graphical Models
|
Abstract: We discuss an efficient implementation of the iterative proportional scaling procedure in multivariate Gaussian graphical models. We show that the computational cost can be reduced by localizing the update procedure at each iterative step, using the structure of a decomposable model obtained by triangulation of the graph associated with the model. Some numerical experiments demonstrate the competitive performance of the proposed algorithm.
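A plain (non-localized) numpy sketch of the IPS update the paper accelerates: cycle over the cliques of the graph, matching each clique-marginal covariance exactly; the paper's localization avoids recomputing the full inverse at every step, which this clarity-first version does:

```python
import numpy as np

def ips(S, cliques, n_iter=100, tol=1e-8):
    """Iterative proportional scaling for a Gaussian graphical model.
    S: sample covariance; cliques: list of index arrays, one per clique.
    Returns the concentration (precision) matrix K of the fitted model."""
    p = S.shape[0]
    K = np.eye(p)
    for _ in range(n_iter):
        K_old = K.copy()
        for C in cliques:
            Sigma = np.linalg.inv(K)          # current model covariance
            K[np.ix_(C, C)] += (np.linalg.inv(S[np.ix_(C, C)])
                                - np.linalg.inv(Sigma[np.ix_(C, C)]))
        if np.abs(K - K_old).max() < tol:
            break
    return K
```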
|
Title: A Markov Basis for Conditional Test of Common Diagonal Effect in Quasi-Independence Model for Square Contingency Tables
|
Abstract: In two-way contingency tables we sometimes find that frequencies along the diagonal cells are relatively larger (or smaller) compared to off-diagonal cells, particularly in square tables with common categories for the rows and the columns. In this case the quasi-independence model with an additional parameter for each of the diagonal cells is usually fitted to the data. A simpler model than the quasi-independence model is to assume a common additional parameter for all the diagonal cells. We consider testing the goodness of fit of this common diagonal effect by a Markov chain Monte Carlo (MCMC) method. We derive an explicit form of a Markov basis for performing the conditional test of the common diagonal effect. Once a Markov basis is given, the MCMC procedure can be easily implemented by techniques of algebraic statistics. We illustrate the procedure with some real data sets.
|
Title: The normal distribution in some constrained sample spaces
|
Abstract: Phenomena with a constrained sample space appear frequently in practice. This is the case e.g. with strictly positive data and with compositional data, like percentages and the like. If the natural measure of difference is not the absolute one, it is possible to use simple algebraic properties to show that it is more convenient to work with a geometry that is not the usual Euclidean geometry in real space, and with a measure which is not the usual Lebesgue measure, leading to alternative models which better fit the phenomenon under study. The general approach is presented and illustrated both on the positive real line and on the D-part simplex.
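A minimal sketch of the log-ratio approach for the simplex case: the centered log-ratio transform carries compositions to real space, where normal-theory statistics apply, and back again; for the positive real line, the analogous step is simply modelling log(data) as normal:

```python
import numpy as np

def clr(x):
    """Centered log-ratio transform: maps compositions (strictly positive
    parts summing to 1) to real space, respecting the simplex geometry."""
    lx = np.log(x)
    return lx - lx.mean(axis=-1, keepdims=True)

def clr_inv(z):
    """Map clr coordinates back to the simplex."""
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

compositions = np.array([[0.1, 0.3, 0.6],
                         [0.2, 0.2, 0.6]])
z = clr(compositions)            # do normal-theory statistics here ...
print(clr_inv(z.mean(axis=0)))   # ... e.g. the mean, mapped back to the simplex
```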
|
Title: Pure Exploration for Multi-Armed Bandit Problems
|
Abstract: We consider the framework of stochastic multi-armed bandit problems and study the possibilities and limitations of forecasters that perform an on-line exploration of the arms. These forecasters are assessed in terms of their simple regret, a regret notion that captures the fact that exploration is only constrained by the number of available rounds (not necessarily known in advance), in contrast to the case when the cumulative regret is considered and when exploitation needs to be performed at the same time. We believe that this performance criterion is suited to situations when the cost of pulling an arm is expressed in terms of resources rather than rewards. We discuss the links between the simple and the cumulative regret. One of the main results in the case of a finite number of arms is a general lower bound on the simple regret of a forecaster in terms of its cumulative regret: the smaller the latter, the larger the former. Keeping this result in mind, we then exhibit upper bounds on the simple regret of some forecasters. The paper ends with a study devoted to continuous-armed bandit problems; we show that the simple regret can be minimized with respect to a family of probability distributions if and only if the cumulative regret can be minimized for it. Based on this equivalence, we are able to prove that the separable metric spaces are exactly the metric spaces on which these regrets can be minimized with respect to the family of all probability distributions with continuous mean-payoff functions.
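A minimal simulation sketch of the pure-exploration setting: uniform (round-robin) exploration for the available rounds, followed by recommending the empirically best arm, with simple regret as the evaluation; Gaussian rewards and the arm means are illustrative:

```python
import numpy as np

def simple_regret_uniform(means, horizon, rng=0):
    """Pull arms round-robin for `horizon` rounds (assumed >= number of
    arms), recommend the empirically best arm, and return the simple
    regret: the gap between the best mean and the recommended arm's mean."""
    gen = np.random.default_rng(rng)
    k = len(means)
    counts = np.zeros(k)
    sums = np.zeros(k)
    for t in range(horizon):
        a = t % k                                # uniform exploration
        sums[a] += gen.normal(means[a], 1.0)     # noisy reward, never exploited
        counts[a] += 1
    recommended = int(np.argmax(sums / counts))
    return max(means) - means[recommended]

regrets = [simple_regret_uniform([0.5, 0.4, 0.3], 300, rng=i) for i in range(100)]
print(np.mean(regrets))   # average simple regret over 100 runs
```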
|
Title: Time Varying Undirected Graphs
|
Abstract: Undirected graphs are often used to describe high dimensional distributions. Under sparsity conditions, the graph can be estimated using $\ell_1$ penalization methods. However, current methods assume that the data are independent and identically distributed. If the distribution, and hence the graph, evolves over time, then the data are no longer identically distributed. In this paper, we show how to estimate the sequence of graphs for non-identically distributed data, where the distribution evolves over time.
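One standard way to carry this out, consistent with the abstract though not necessarily the authors' exact estimator, is to kernel-weight the observations around each time point and apply the graphical lasso to the resulting local covariance:

```python
import numpy as np
from sklearn.covariance import graphical_lasso

def graph_at_time(X, times, t0, bandwidth, alpha=0.1):
    """Estimate the graph at time t0 from non-identically distributed data:
    weight observations by a Gaussian kernel around t0, form the local
    weighted covariance, and apply l1-penalized (graphical lasso) estimation."""
    w = np.exp(-0.5 * ((times - t0) / bandwidth) ** 2)
    w /= w.sum()
    Xc = X - w @ X                              # center at the local weighted mean
    S = (Xc * w[:, None]).T @ Xc                # kernel-weighted covariance
    cov, prec = graphical_lasso(S, alpha=alpha)
    return prec                                 # zero entries = absent edges at t0
```

Sweeping t0 over the observation times yields the estimated sequence of graphs.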
|