Abstract: In this contribution, we propose a generic online (also sometimes called adaptive or recursive) version of the Expectation-Maximisation (EM) algorithm applicable to latent variable models of independent observations. Compared to the algorithm of Titterington (1984), this approach is more directly connected to the usual EM algorithm and does not rely on integration with respect to the complete data distribution. The resulting algorithm is usually simpler and is shown to achieve convergence to the stationary points of the Kullback-Leibler divergence between the marginal distribution of the observation and the model distribution at the optimal rate, i.e., that of the maximum likelihood estimator. In addition, the proposed approach is also suitable for conditional (or regression) models, as illustrated in the case of the mixture of linear regressions model.
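The recursive flavour of online EM can be sketched for the simplest case, a two-component Gaussian mixture with known common variance. This is an illustrative sketch, not the paper's exact algorithm: the function name, initial values, and step-size schedule are assumptions; each observation triggers one E-step, a stochastic-approximation update of running sufficient statistics, and an exact M-step.

```python
import math

def online_em_gmm(stream, mu=(-1.0, 1.0), sigma=1.0):
    """One-pass (online) EM for a two-component Gaussian mixture."""
    s_w = [0.5, 0.5]                      # running responsibility mass per component
    s_y = [0.5 * mu[0], 0.5 * mu[1]]      # running responsibility-weighted sums
    w, mu = [0.5, 0.5], list(mu)
    for n, y in enumerate(stream, start=1):
        gamma = n ** -0.6                 # step-size exponent in (1/2, 1]
        # E-step for the single new observation: posterior responsibilities.
        dens = [w[k] * math.exp(-(y - mu[k]) ** 2 / (2 * sigma ** 2)) for k in (0, 1)]
        r = [d / sum(dens) for d in dens]
        # Stochastic-approximation update of the sufficient statistics.
        for k in (0, 1):
            s_w[k] = (1 - gamma) * s_w[k] + gamma * r[k]
            s_y[k] = (1 - gamma) * s_y[k] + gamma * r[k] * y
        # M-step: parameters maximising the complete-data likelihood given the statistics.
        w = list(s_w)
        mu = [s_y[k] / s_w[k] for k in (0, 1)]
    return w, mu
```

On a well-separated mixture the running means settle near the true component means, in line with the optimal-rate convergence the abstract refers to.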
Title: Updating Probabilities: A Complex Agent Based Example
Abstract: It has been shown that one can accommodate data (Bayes) and constraints (MaxEnt) in one method, the method of Maximum (relative) Entropy (ME) (Giffin 2007). In this paper we show a complex agent-based example of inference with two different forms of information: moments and data. In this example, several agents each receive partial information about a system in the form of data. In addition, each agent agrees or is informed that there are certain global constraints on the system that are always true. The agents are then asked to make inferences about the entire system. The system becomes more complex as we add agents and allow them to share information. This system can have a geometrical form, such as a crystal structure. The shape may dictate how the agents are able to share information, such as sharing with nearest neighbors. This method can be used to model many systems where the agents or cells have local or partial information but must adhere to some global rules. It could also illustrate how the agents evolve and could illuminate emergent behavior of the system.
Title: Convergence of Expected Utilities with Algorithmic Probability Distributions
Abstract: We consider an agent interacting with an unknown environment. The environment is a function which maps natural numbers to natural numbers; the agent's set of hypotheses about the environment contains all such functions which are computable and compatible with a finite set of known input-output pairs, and the agent assigns a positive probability to each such hypothesis. We do not require that this probability distribution be computable, but it must be bounded below by a positive computable function. The agent has a utility function on outputs from the environment. We show that if this utility function is bounded below in absolute value by an unbounded computable function, then the expected utility of any input is undefined. This implies that a computable utility function will have convergent expected utilities iff that function is bounded.
Title: Dispersion Models for Extremes
Abstract: We propose extreme value analogues of natural exponential families and exponential dispersion models, and introduce the slope function as an analogue of the variance function. The set of quadratic and power slope functions characterize well-known families such as the Rayleigh, Gumbel, power, Pareto, logistic, negative exponential, Weibull and Fr\'echet. We show a convergence theorem for slope functions, by which we may express the classical extreme value convergence results in terms of asymptotics for extreme dispersion models. The main idea is to explore the parallels between location families and natural exponential families, and between the convolution and minimum operations.
Title: Judgment
Abstract: The concept of a judgment as a logical action which introduces new information into a deductive system is examined. This leads to a way of mathematically representing implication which is distinct from the familiar material implication, according to which "If A then B" is considered to be equivalent to "B or not-A". This leads, in turn, to a resolution of the paradox of the raven.
Title: Does intelligence imply contradiction?
Abstract: Contradiction is often seen as a defect of intelligent systems and a dangerous limitation on efficiency. In this paper we raise the question of whether, on the contrary, it could be considered a key tool in increasing intelligence in biological structures. A possible way of answering this question in a mathematical context is shown, formulating a proposition that suggests a link between intelligence and contradiction. A concrete approach is presented in the well-defined setting of cellular automata. Here we define the models of ``observer'', ``entity'', ``environment'', ``intelligence'' and ``contradiction''. These definitions, which roughly correspond to the common meaning of these words, allow us to deduce a simple but strong result about these concepts in an unbiased, mathematical manner. Evidence for a real-world counterpart to the demonstrated formal link between intelligence and contradiction is provided by three computational experiments.
Title: Toward a statistical mechanics of four letter words
Abstract: We consider words as a network of interacting letters, and approximate the probability distribution of states taken on by this network. Despite the intuition that the rules of English spelling are highly combinatorial (and arbitrary), we find that maximum entropy models consistent with pairwise correlations among letters provide a surprisingly good approximation to the full statistics of four letter words, capturing 92% of the multi-information among letters and even "discovering" real words that were not represented in the data from which the pairwise correlations were estimated. The maximum entropy model defines an energy landscape on the space of possible words, and local minima in this landscape account for nearly two-thirds of words used in written English.
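The energy-landscape idea can be made concrete with a toy version. This is only a sketch of the notions of "energy" and "local minimum": the pairwise couplings below are crude log-count scores estimated from a handful of training words, a stand-in for a true maximum-entropy fit (which would require iterative scaling), and the word list is invented for illustration.

```python
import math
from collections import Counter

def pairwise_energy(words):
    """Return an energy function on words built from pairwise letter statistics."""
    pair = Counter()
    for w in words:
        for i in range(len(w)):
            for j in range(i + 1, len(w)):
                pair[(i, j, w[i], w[j])] += 1
    def energy(w):
        # Lower energy = letter pairs more compatible with the training data.
        return -sum(math.log(1 + pair[(i, j, w[i], w[j])])
                    for i in range(len(w)) for j in range(i + 1, len(w)))
    return energy

def is_local_minimum(w, energy, alphabet="abcdefghijklmnopqrstuvwxyz"):
    # Local minimum of the landscape: no single-letter substitution lowers the energy.
    e0 = energy(w)
    return not any(energy(w[:i] + a + w[i + 1:]) < e0
                   for i in range(len(w)) for a in alphabet if a != w[i])
```

Real words tend to sit at local minima of such a landscape, while a non-word like "xold" can be lowered by flipping a letter back to a seen combination.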
Title: Nonparametric sequential prediction of time series
Abstract: Time series prediction covers a vast field of every-day statistical applications in medical, environmental and economic domains. In this paper we develop nonparametric prediction strategies based on the combination of a set of 'experts' and show the universal consistency of these strategies under a minimum of conditions. We perform an in-depth analysis of real-world data sets and show that these nonparametric strategies are more flexible, faster and generally outperform ARMA methods in terms of normalized cumulative prediction error.
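The expert-combination idea can be illustrated with the classical exponentially weighted average forecaster, a standard building block in this literature rather than the paper's exact strategy; the expert functions, learning rate `eta`, and squared loss below are illustrative choices.

```python
import math

def aggregate(stream, experts, eta=2.0):
    """Exponentially weighted average forecaster.

    Each expert maps the observed past to a prediction; its weight decays
    exponentially in its cumulative squared error.
    """
    weights = [1.0] * len(experts)
    past, preds = [], []
    for y in stream:
        guesses = [f(past) for f in experts]
        total = sum(weights)
        preds.append(sum(w * g for w, g in zip(weights, guesses)) / total)
        for k, g in enumerate(guesses):
            weights[k] *= math.exp(-eta * (g - y) ** 2)
        past.append(y)
    return preds
```

On a constant series, the combined forecast moves from the uniform average toward the best expert as the weights concentrate.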
Title: Exactness of Belief Propagation for Some Graphical Models with Loops
Abstract: It is well known that an arbitrary graphical model of statistical inference defined on a tree, i.e. on a graph without loops, is solved exactly and efficiently by an iterative Belief Propagation (BP) algorithm, convergent to the unique minimum of the so-called Bethe free energy functional. For a general graphical model on a loopy graph the functional may show multiple minima, the iterative BP algorithm may converge to one of the minima or may not converge at all, and the global minimum of the Bethe free energy functional is not guaranteed to correspond to the optimal Maximum-Likelihood (ML) solution in the zero-temperature limit. However, there are exceptions to this general rule, discussed previously in two different contexts, where the zero-temperature version of the BP algorithm finds the ML solution for special models on graphs with loops. These two models share a key feature: their ML solutions can be found by an efficient Linear Programming (LP) algorithm with a Totally Unimodular (TUM) matrix of constraints. Generalizing the two models, we consider a class of graphical models reducible in the zero-temperature limit to LP with TUM constraints. Assuming that a gedanken algorithm, g-BP, finding the global minimum of the Bethe free energy is available, we show that in the limit of zero temperature g-BP outputs the ML solution. Our consideration is based on an equivalence established between the gapless LP relaxation of the graphical model in the $T\to 0$ limit and the respective LP version of the Bethe free energy minimization.
Title: Staring at Economic Aggregators through Information Lenses
Abstract: It is hard to exaggerate the role of economic aggregators -- functions that summarize numerous and/or heterogeneous data -- in economic models since the early XX$^{th}$ century. In many cases, as witnessed by the pioneering works of Cobb and Douglas, these functions were information quantities tailored to economic theories, i.e. they were built to fit economic phenomena. In this paper, we look at these functions from the complementary side: information. We use a recent toolbox built on top of a vast class of distortions coined by Bregman, whose field of application rivals that of metrics in various subfields of mathematics. This toolbox makes it possible to assess the quality of an aggregator (for consumptions, prices, labor, capital, wages, etc.) from the standpoint of the information it carries. We prove a rather striking result: from the informational standpoint, well-known economic aggregators do belong to the optimal set. As common economic assumptions enter the analysis, this large set shrinks, and it essentially reduces to either CES, or Cobb-Douglas, or both. To summarize, in the relevant economic contexts, one could not have crafted a better aggregator from the information standpoint. We also discuss the global economic behavior of optimal information aggregators in general, and present a brief panorama of the links between economic and information aggregators. Keywords: Economic Aggregators, CES, Cobb-Douglas, Bregman divergences
Title: An Alternative Prior Process for Nonparametric Bayesian Clustering
Abstract: Prior distributions play a crucial role in Bayesian approaches to clustering. Two commonly-used prior distributions are the Dirichlet and Pitman-Yor processes. In this paper, we investigate the predictive probabilities that underlie these processes, and the implicit "rich-get-richer" characteristic of the resulting partitions. We explore an alternative prior for nonparametric Bayesian clustering -- the uniform process -- for applications where the "rich-get-richer" property is undesirable. We also explore the cost of this process: partitions are no longer exchangeable with respect to the ordering of variables. We present new asymptotic and simulation-based results for the clustering characteristics of the uniform process and compare these with known results for the Dirichlet and Pitman-Yor processes. We compare performance on a real document clustering task, demonstrating the practical advantage of the uniform process despite its lack of exchangeability over orderings.
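The contrast between the two predictive rules can be made concrete. Below, `counts` lists current cluster sizes and `alpha`/`theta` are illustrative names for the concentration parameters; the last entry of each returned vector is the probability of starting a new cluster.

```python
def crp_predictive(counts, alpha):
    # Chinese restaurant process (Dirichlet process) rule: an existing cluster
    # is chosen with probability proportional to its size ("rich get richer");
    # a new cluster with probability proportional to alpha.
    n = sum(counts)
    return [c / (n + alpha) for c in counts] + [alpha / (n + alpha)]

def uniform_predictive(counts, theta):
    # Uniform-process rule: every existing cluster is equally likely
    # regardless of its size; a new cluster has weight theta.
    k = len(counts)
    return [1.0 / (k + theta) for _ in counts] + [theta / (k + theta)]
```

With cluster sizes (8, 2), the CRP strongly favours the large cluster while the uniform process treats both clusters (and a new one) identically, which is exactly the property exploited when "rich get richer" is undesirable.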
Title: Adjusted Bayesian inference for selected parameters
Abstract: We address the problem of providing inference from a Bayesian perspective for parameters selected after viewing the data. We present a Bayesian framework for providing inference for selected parameters, based on the observation that providing Bayesian inference for selected parameters is a truncated-data problem. We show that if the prior for the parameter is non-informative, or if the parameter is a "fixed" unknown constant, then it is necessary to adjust the Bayesian inference for selection. Our second contribution is the introduction of Bayesian False Discovery Rate controlling methodology, which generalizes existing Bayesian FDR methods that are only defined in the two-group mixture model. We illustrate our results by applying them to simulated data and to data from a microarray experiment.
Title: Implementation of perception and action at nanoscale
Abstract: The real-time combination of nanosensors and nanoactuators with a virtual-reality environment and multisensorial interfaces enables us to act and perceive efficiently at the nanoscale. Advanced manipulation of nano-objects and new strategies for scientific education are the key motivations. We have no existing intuitive representation of the nanoworld, which is ruled by laws foreign to our experience. A central challenge is then the construction of a nanoworld simulacrum that we can start to visit and explore. In this nanoworld simulacrum, object identification will be based on the intrinsic physical and chemical properties of the probed entities, on their interactions with the sensors, and on the final choices made in building a multisensorial interface, so that these objects become coherent elements of the human sphere of action and perception. Here we describe a 1D virtual nanomanipulator, part of the Cit\'e des Sciences EXPO NANO in Paris, which is the first realization based on this program.
Title: Evolution of central pattern generators for the control of a five-link bipedal walking mechanism
Abstract: Central pattern generators (CPGs), with a basis in neurophysiological studies, are a type of neural network for the generation of rhythmic motion. While CPGs are being increasingly used in robot control, most applications are hand-tuned for a specific task and it is acknowledged in the field that generic methods and design principles for creating individual networks for a given task are lacking. This study presents an approach where the connectivity and oscillatory parameters of a CPG network are determined by an evolutionary algorithm with fitness evaluations in a realistic simulation with accurate physics. We apply this technique to a five-link planar walking mechanism to demonstrate its feasibility and performance. In addition, to see whether results from simulation can be acceptably transferred to real robot hardware, the best evolved CPG network is also tested on a real mechanism. Our results also confirm that the biologically inspired CPG model is well suited for legged locomotion, since a diverse range of networks was observed to succeed in the fitness evaluations during evolution.
Title: Batch kernel SOM and related Laplacian methods for social network analysis
Abstract: Large graphs are natural mathematical models for describing the structure of the data in a wide variety of fields, such as web mining, social networks, information retrieval, biological networks, etc. For all these applications, automatic tools are required to get a synthetic view of the graph and to reach a good understanding of the underlying problem. In particular, discovering groups of tightly connected vertices and understanding the relations between those groups is very important in practice. This paper shows how a kernel version of the batch Self Organizing Map can be used to achieve these goals via kernels derived from the Laplacian matrix of the graph, especially when it is used in conjunction with more classical methods based on the spectral analysis of the graph. The proposed method is used to explore the structure of a medieval social network modeled through a weighted graph that has been directly built from a large corpus of agrarian contracts.
Title: Estimation of $P(Y<X)$ in the Exponential Case Based on Censored Samples
Abstract: In this article, the estimation of reliability of a system is discussed $p(y<x)$ when strength, $X$, and stress, $Y$, are two independent exponential distribution with different scale parameters when the available data are type II Censored sample. Different methods for estimating the reliability are applied. The point estimators obtained are maximum likelihood estimator, uniformly minimum variance unbiased estimator, and Bayesian estimators based on conjugate and non informative prior distributions. A comparison of the estimates obtained is performed. Interval estimators of the reliability are also discussed.
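For intuition, with complete (uncensored) samples the quantity has a closed form and a simple plug-in estimator. The sketch below uses the rate parametrisation and omits the type II censoring corrections that the paper actually studies; function names are illustrative.

```python
def reliability(rate_x, rate_y):
    # P(Y < X) for independent X ~ Exp(rate_x) (strength), Y ~ Exp(rate_y) (stress):
    # integrating P(X > y) against the density of Y gives rate_y / (rate_x + rate_y).
    return rate_y / (rate_x + rate_y)

def mle_reliability(xs, ys):
    # Plug-in maximum-likelihood estimate from complete samples; the MLE of an
    # exponential rate is the reciprocal of the sample mean.
    rate_x = len(xs) / sum(xs)
    rate_y = len(ys) / sum(ys)
    return reliability(rate_x, rate_y)
```

Equal rates give reliability 1/2, and doubling the mean stress relative to the mean strength pushes the plug-in estimate down to 1/3.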
Title: Corpus sp\'ecialis\'e et ressource de sp\'ecialit\'e
Abstract: The "Semantic Atlas" is a mathematical and statistical model for visualising word senses according to relations between words. The model, which has been applied to proximity relations from a corpus, has shown its ability to distinguish word senses as the corpus' contributors comprehend them. We propose to use the model and a specialised corpus in order to automatically create a specialised dictionary relative to the corpus' domain. A morpho-syntactic analysis performed on the corpus makes it possible to create the dictionary from syntactic relations between lexical units. The semantic resource can be used to navigate semantically - and not only lexically - through the corpus, to create classical dictionaries or for diachronic studies of the language.
Title: Imprecise probability trees: Bridging two theories of imprecise probability
Abstract: We give an overview of two approaches to probability theory where lower and upper probabilities, rather than probabilities, are used: Walley's behavioural theory of imprecise probabilities, and Shafer and Vovk's game-theoretic account of probability. We show that the two theories are more closely related than would be suspected at first sight, and we establish a correspondence between them that (i) has an interesting interpretation, and (ii) allows us to freely import results from one theory into the other. Our approach leads to an account of probability trees and random processes in the framework of Walley's theory. We indicate how our results can be used to reduce the computational complexity of dealing with imprecision in probability trees, and we prove an interesting and quite general version of the weak law of large numbers.
Title: Exchangeable lower previsions
Abstract: We extend de Finetti's (1937) notion of exchangeability to finite and countable sequences of variables, when a subject's beliefs about them are modelled using coherent lower previsions rather than (linear) previsions. We prove representation theorems in both the finite and the countable case, in terms of sampling without and with replacement, respectively. We also establish a convergence result for sample means of exchangeable sequences. Finally, we study and solve the problem of exchangeable natural extension: how to find the most conservative (point-wise smallest) coherent and exchangeable lower prevision that dominates a given lower prevision.
Title: Le terme et le concept : fondements d'une ontoterminologie
Abstract: Most definitions of ontology, viewed as a "specification of a conceptualization", agree on the fact that while an ontology can take different forms, it necessarily includes a vocabulary of terms and some specification of their meaning in relation to the domain's conceptualization. And as domain knowledge is mainly conveyed through scientific and technical texts, we can hope to extract some useful information from them for building an ontology. But is it as simple as that? In this article we shall see that the lexical structure, i.e. the network of words linked by linguistic relationships, does not necessarily match the domain conceptualization. We have to bear in mind that writing documents is the concern of textual linguistics, one of whose principles is the incompleteness of text, whereas building an ontology - viewed as task-independent knowledge - is concerned with conceptualization based on formal, not natural, languages. Nevertheless, the famous Sapir-Whorf hypothesis, concerning the interdependence of thought and language, is also applicable to formal languages. This means that the way an ontology is built and a concept is defined depends directly on the formal language which is used, and the results will not be the same. The introduction of the notion of ontoterminology makes it possible to take epistemological principles into account in formal ontology building.
Title: Stream Computing
Abstract: Stream computing is the use of multiple autonomic and parallel modules together with integrative processors at a higher level of abstraction to embody "intelligent" processing. The biological basis of this computing is sketched and the matter of learning is examined.
Title: The emerging field of language dynamics
Abstract: A simple review by a linguist, citing many articles by physicists: Quantitative methods, agent-based computer simulations, language dynamics, language typology, historical linguistics
Title: Parameterizations and fitting of bi-directed graph models to categorical data
Abstract: We discuss two parameterizations of models for marginal independencies for discrete distributions which are representable by bi-directed graph models, under the global Markov property. Such models are useful data analytic tools especially if used in combination with other graphical models. The first parameterization, in the saturated case, is also known as the multivariate logistic transformation; the second is a variant that allows, in some (but not all) cases, variation-independent parameters. An algorithm for maximum likelihood fitting is proposed, based on an extension of the Aitchison and Silvey method.
Title: Parametric and nonparametric models and methods in financial econometrics
Abstract: Financial econometrics has become an increasingly popular research field. In this paper we review a few parametric and nonparametric models and methods used in this area. After introducing several widely used continuous-time and discrete-time models, we study in detail dependence structures of discrete samples, including Markovian property, hidden Markovian structure, contaminated observations, and random samples. We then discuss several popular parametric and nonparametric estimation methods. To avoid model mis-specification, model validation plays a key role in financial modeling. We discuss several model validation techniques, including pseudo-likelihood ratio test, nonparametric curve regression based test, residuals based test, generalized likelihood ratio test, simultaneous confidence band construction, and density based test. Finally, we briefly touch on tools for studying large sample properties.
Title: Computational approach to the emergence and evolution of language - evolutionary naming game model
Abstract: Computational modelling with multi-agent systems is becoming an important technique for studying language evolution. We present a brief introduction to this rapidly developing field, as well as our own contributions, which include an analysis of the evolutionary naming-game model. In this model communicating agents, which try to establish a common vocabulary, are equipped with an evolutionarily selected learning ability. Such a coupling of biological and linguistic ingredients results in an abrupt transition: upon a small change of the model's control parameter, a poorly communicating group of linguistically unskilled agents transforms into an almost perfectly communicating group with large learning abilities. Genetic imprinting of the learning abilities proceeds via the Baldwin effect: initially, unskilled communicating agents learn a language, and that creates a niche in which there is an evolutionary pressure for the increase of learning ability. Under the assumption that communication intensity increases continuously with finite speed, the transition is split into several transition-like changes. This shows that the speed of cultural changes, which sets an additional characteristic timescale, might be yet another factor affecting the evolution of language. In our opinion, this model shows that linguistic and biological processes have a strong influence on each other, and this effect has certainly contributed to the explosive development of our species.
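The purely cultural core of the model, without the evolutionary learning-ability layer the paper adds, is a few lines of simulation. Agent count, round count, and the interaction details below are illustrative assumptions, not the paper's exact setup.

```python
import random

def naming_game(n_agents=20, n_rounds=3000, seed=0):
    """Minimal naming game: random speaker/hearer pairs negotiate word-object links."""
    rng = random.Random(seed)
    vocab = [set() for _ in range(n_agents)]   # each agent's candidate words
    next_word = 0
    for _ in range(n_rounds):
        s, h = rng.sample(range(n_agents), 2)  # speaker and hearer
        if not vocab[s]:
            vocab[s].add(next_word)            # speaker invents a new word
            next_word += 1
        word = rng.choice(sorted(vocab[s]))
        if word in vocab[h]:
            vocab[s] = {word}                  # success: both collapse to the agreed word
            vocab[h] = {word}
        else:
            vocab[h].add(word)                 # failure: hearer memorises the word
    return vocab
```

Since an agent invents at most once and never empties its vocabulary again, the number of distinct words is bounded by the number of agents, and repeated successes drive the population toward a shared vocabulary.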
Title: A new transform for solving the noisy complex exponentials approximation problem
Abstract: The problem of estimating a complex measure made up by a linear combination of Dirac distributions centered on points of the complex plane from a finite number of its complex moments affected by additive i.i.d. Gaussian noise is considered. A random measure is defined whose expectation approximates the unknown measure under suitable conditions. An estimator of the approximating measure is then proposed, as well as a new discrete transform of the noisy moments that allows one to compute an estimate of the unknown measure. A small simulation study is also performed to experimentally check the goodness of the approximations.
Title: Adaptive Independent Metropolis-Hastings by Fast Estimation of Mixtures of Normals
Abstract: We construct an adaptive independent Metropolis-Hastings sampler that uses a mixture of normals as a proposal distribution. To take full advantage of the potential of adaptive sampling our algorithm updates the mixture of normals frequently, starting early in the chain. The algorithm is built for speed and reliability and its sampling performance is evaluated with real and simulated examples. Our article outlines conditions for adaptive sampling to hold and gives a readily accessible proof that under these conditions the sampling scheme generates iterates that converge to the target distribution.
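A stripped-down sketch of an adaptive independent sampler: a single moment-matched normal stands in for the paper's frequently refitted mixture of normals, and the refit schedule and starting values are illustrative. Note that this kind of adaptation is only valid under conditions such as those the article outlines, which the sketch does not check.

```python
import math, random

def adaptive_imh(log_target, n_iter=5000, seed=1):
    """Independent Metropolis-Hastings with a periodically refitted normal proposal."""
    rng = random.Random(seed)
    mu, sd = 0.0, 5.0                     # deliberately overdispersed initial proposal
    def log_q(x):                         # log proposal density (up to a constant)
        return -0.5 * ((x - mu) / sd) ** 2 - math.log(sd)
    x, chain = 0.0, []
    for t in range(1, n_iter + 1):
        y = rng.gauss(mu, sd)
        # Independent-proposal MH ratio: target and proposal densities at both points.
        log_a = (log_target(y) - log_target(x)) + (log_q(x) - log_q(y))
        if math.log(rng.random()) < log_a:
            x = y
        chain.append(x)
        if t >= 1000 and t % 500 == 0:    # periodic refit of the proposal
            m = sum(chain) / len(chain)
            v = sum((c - m) ** 2 for c in chain) / len(chain)
            mu, sd = m, max(math.sqrt(v), 0.1)
    return chain
```

On a standard normal target, the proposal tightens from its overdispersed start toward the target's moments as the chain accumulates.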
Title: Online variants of the cross-entropy method
Abstract: The cross-entropy method is a simple but efficient method for global optimization. In this paper we provide two online variants of the basic CEM, together with a proof of convergence.
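For reference, the basic batch CEM that the online variants build on can be sketched in one dimension with a Gaussian sampling distribution; sample sizes, elite fraction, and names are illustrative defaults.

```python
import random

def cem_minimize(f, mu=0.0, sd=10.0, n_samples=50, elite=10, n_iter=30, seed=0):
    """Basic cross-entropy method for 1-D minimisation.

    Each iteration samples candidates, keeps the elite (lowest-f) fraction,
    and refits the sampling distribution to those elites.
    """
    rng = random.Random(seed)
    for _ in range(n_iter):
        xs = sorted((rng.gauss(mu, sd) for _ in range(n_samples)), key=f)
        best = xs[:elite]
        mu = sum(best) / elite
        sd = max((sum((x - mu) ** 2 for x in best) / elite) ** 0.5, 1e-6)
    return mu
```

The online variants in the paper replace these batch elite statistics with running updates performed sample by sample.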
Title: Factored Value Iteration Converges
Abstract: In this paper we propose a novel algorithm, factored value iteration (FVI), for the approximate solution of factored Markov decision processes (fMDPs). The traditional approximate value iteration algorithm is modified in two ways. For one, the least-squares projection operator is modified so that it does not increase max-norm, and thus preserves convergence. The other modification is that we uniformly sample polynomially many samples from the (exponentially large) state space. This way, the complexity of our algorithm becomes polynomial in the size of the fMDP description length. We prove that the algorithm is convergent. We also derive an upper bound on the difference between our approximate solution and the optimal one, and also on the error introduced by sampling. We analyze various projection operators with respect to their computation complexity and their convergence when combined with approximate value iteration.
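The exact dynamic-programming core that FVI approximates is ordinary value iteration; the factored representation, state sampling, and max-norm-safe projection are the paper's additions on top of this loop. The toy tabular sketch below, with assumed `P[s][a][t]` transition and `R[s][a]` reward conventions, shows only that core.

```python
def value_iteration(P, R, gamma=0.9, tol=1e-8):
    """Tabular value iteration: V <- max_a [ R(s,a) + gamma * sum_t P(t|s,a) V(t) ].

    The Bellman backup is a gamma-contraction in max-norm, which is the
    property FVI preserves by keeping its projection max-norm non-expansive.
    """
    n = len(R)
    V = [0.0] * n
    while True:
        newV = [max(R[s][a] + gamma * sum(P[s][a][t] * V[t] for t in range(n))
                    for a in range(len(R[s])))
                for s in range(n)]
        if max(abs(a - b) for a, b in zip(newV, V)) < tol:
            return newV
        V = newV
```

On a two-state chain where both states move to an absorbing state paying 1 per step, the fixed point is V(1) = 1/(1-gamma) and V(0) = gamma * V(1).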
Title: A Comparison of natural (English) and artificial (Esperanto) languages. A multifractal-method-based analysis
Abstract: We present a comparison of two English texts by Lewis Carroll, Alice in Wonderland and Through the Looking-Glass, the former also in its Esperanto translation, in order to observe whether natural and artificial languages significantly differ from each other. We construct one-dimensional time-series-like signals using either word lengths or word frequencies. We use multifractal ideas for sorting out correlations in the writings. In order to check the robustness of the methods we also analyse the corresponding shuffled texts. We compare characteristic functions and, e.g., observe marked differences in the (far from parabolic) f(alpha) curves, differences which we attribute to Tsallis non-extensive statistical features of the ''frequency time series'' and ''length time series''. The Esperanto text has more extreme values. A very rough approximation consists in modeling the texts as a random Cantor set resulting from a binomial cascade of long and short words (or words and blanks). This leads to parameters characterizing the text style and, most likely, ultimately the author's writing.
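The two ingredients of the rough model, the word-length signal and the binomial cascade, are easy to state in code. The tokenisation rule and cascade parameter below are illustrative assumptions.

```python
import re

def length_series(text):
    # The "length time series": the k-th sample is the length of the k-th word.
    return [len(w) for w in re.findall(r"[a-zA-Z']+", text.lower())]

def binomial_cascade(p=0.7, levels=10):
    # Binomial multiplicative cascade on 2**levels cells: the toy multifractal
    # measure the texts are compared against (long vs. short words receive
    # weights p and 1 - p at every scale).
    measure = [1.0]
    for _ in range(levels):
        measure = [m * w for m in measure for w in (p, 1.0 - p)]
    return measure
```

Since the two children of every cell carry weights summing to their parent's, the cascade conserves total mass at every level.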
Title: Penalized Clustering of Large Scale Functional Data with Multiple Covariates
Abstract: In this article, we propose a penalized clustering method for large scale data with multiple covariates through a functional data approach. In the proposed method, responses and covariates are linked together through nonparametric multivariate functions (fixed effects), which have great flexibility in modeling a variety of function features, such as jump points, branching, and periodicity. Functional ANOVA is employed to further decompose multivariate functions in a reproducing kernel Hilbert space and provide associated notions of main effect and interaction. Parsimonious random effects are used to capture various correlation structures. The mixed-effect models are nested under a general mixture model, in which the heterogeneity of functional data is characterized. We propose a penalized Henderson's likelihood approach for model-fitting and design a rejection-controlled EM algorithm for the estimation. Our method selects smoothing parameters through generalized cross-validation. Furthermore, the Bayesian confidence intervals are used to measure the clustering uncertainty. Simulation studies and real-data examples are presented to investigate the empirical performance of the proposed method. Open-source code is available in the R package MFDA.
Title: Some thoughts on the asymptotics of the deconvolution kernel density estimator
Abstract: Via a simulation study we compare the finite sample performance of the deconvolution kernel density estimator in the supersmooth deconvolution problem to its asymptotic behaviour predicted by two asymptotic normality theorems. Our results indicate that for lower noise levels and moderate sample sizes the match between the asymptotic theory and the finite sample performance of the estimator is not satisfactory. On the other hand we show that the two approaches produce reasonably close results for higher noise levels. These observations in turn provide additional motivation for the study of deconvolution problems under the assumption that the error term variance $\sigma^2\to 0$ as the sample size $n\to\infty.$
Title: A greedy approach to sparse canonical correlation analysis
Abstract: We consider the problem of sparse canonical correlation analysis (CCA), i.e., the search for two linear combinations, one for each multivariate, that yield maximum correlation using a specified number of variables. We propose an efficient numerical approximation based on a direct greedy approach which bounds the correlation at each stage. The method is specifically designed to cope with large data sets and its computational complexity depends only on the sparsity levels. We analyze the algorithm's performance through the tradeoff between correlation and parsimony. The results of numerical simulation suggest that a significant portion of the correlation may be captured using a relatively small number of variables. In addition, we examine the use of sparse CCA as a regularization method when the number of available samples is small compared to the dimensions of the multivariates.
Title: Strongly Consistent Model Order Selection for Estimating 2-D Sinusoids in Colored Noise
Abstract: We consider the problem of jointly estimating the number as well as the parameters of two-dimensional sinusoidal signals, observed in the presence of an additive colored noise field. We begin by elaborating on the least squares estimation of 2-D sinusoidal signals, when the assumed number of sinusoids is incorrect. In the case where the number of sinusoidal signals is under-estimated we show the almost sure convergence of the least squares estimates to the parameters of the dominant sinusoids. In the case where this number is over-estimated, the estimated parameter vector obtained by the least squares estimator contains a sub-vector that converges almost surely to the correct parameters of the sinusoids. Based on these results, we prove the strong consistency of a new model order selection rule.