Title: Kernel estimators of asymptotic variance for adaptive Markov chain Monte Carlo
Abstract: We study the asymptotic behavior of kernel estimators of asymptotic variances (or long-run variances) for a class of adaptive Markov chains. The convergence is studied both in $L^p$ and almost surely. The results also apply to Markov chains and improve on the existing literature by imposing weaker conditions. We illustrate the results with applications to the $\operatorname{GARCH}(1,1)$ Markov model and to an adaptive MCMC algorithm for Bayesian logistic regression.
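For concreteness, here is a minimal sketch (not from the paper) of a lag-window kernel estimator of the long-run variance of an MCMC output; the Bartlett kernel and the n^(1/3) bandwidth rule are illustrative assumptions rather than the conditions studied in the paper.

```python
import numpy as np

def kernel_asymptotic_variance(x, bandwidth=None):
    """Lag-window (kernel) estimate of the long-run variance
    sigma^2 = gamma_0 + 2 * sum_{k>=1} gamma_k of a chain output x,
    using a Bartlett (triangular) kernel."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    if bandwidth is None:
        bandwidth = max(1, int(n ** (1 / 3)))  # illustrative rule-of-thumb rate
    xc = x - x.mean()
    # empirical autocovariances gamma_k for k = 0, ..., bandwidth
    gammas = np.array([xc[: n - k] @ xc[k:] / n for k in range(bandwidth + 1)])
    # Bartlett kernel weights w(k / bandwidth) = 1 - k / bandwidth
    weights = 1.0 - np.arange(bandwidth + 1) / bandwidth
    return gammas[0] + 2.0 * np.sum(weights[1:] * gammas[1:])

# usage: sigma2_hat = kernel_asymptotic_variance(f_values_along_the_chain)
```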
Title: Sharp Dichotomies for Regret Minimization in Metric Spaces
Abstract: The Lipschitz multi-armed bandit (MAB) problem generalizes the classical multi-armed bandit problem by assuming one is given side information consisting of a priori upper bounds on the difference in expected payoff between certain pairs of strategies. Classical results of Lai and Robbins (1985) and Auer et al. (2002) imply a logarithmic regret bound for the Lipschitz MAB problem on finite metric spaces. Recent results on continuum-armed bandit problems and their generalizations imply lower bounds of $\Omega(\sqrt{t})$, or stronger, for many infinite metric spaces such as the unit interval. Is this dichotomy universal? We prove that the answer is yes: for every metric space, the optimal regret of a Lipschitz MAB algorithm is either bounded above by any $f\in \omega(\log t)$, or bounded below by any $g\in o(\sqrt{t})$. Perhaps surprisingly, this dichotomy does not coincide with the distinction between finite and infinite metric spaces; instead it depends on whether the completion of the metric space is compact and countable. Our proof connects upper and lower bound techniques in online learning with classical topological notions such as perfect sets and the Cantor-Bendixson theorem. Among many other results, we show a similar dichotomy for the "full-feedback" (a.k.a. "best-expert") version.
Title: Global sensitivity analysis for models with spatially dependent outputs
Abstract: The global sensitivity analysis of a complex numerical model often calls for the estimation of variance-based importance measures, named Sobol' indices. Metamodel-based techniques have been developed in order to replace the CPU-time-expensive computer code with an inexpensive mathematical function, which predicts the computer code output. The common metamodel-based sensitivity analysis methods are well-suited for computer codes with scalar outputs. However, in the environmental domain, as in many areas of application, the numerical model outputs are often spatial maps, which may also vary with time. In this paper, we introduce an innovative method to obtain a spatial map of Sobol' indices with a minimal number of numerical model computations. It is based upon the functional decomposition of the spatial output onto a wavelet basis and the metamodeling of the wavelet coefficients by a Gaussian process. An analytical example is presented to clarify the various steps of our methodology. This technique is then applied to a real hydrogeological case: for each model input variable, a spatial map of Sobol' indices is thus obtained.
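As a reference point for what a Sobol' index is, the sketch below implements the standard pick-freeze Monte Carlo estimator of first-order indices for a scalar-output model; this is not the paper's wavelet/Gaussian-process metamodel approach, and `model` (vectorized over an (n, d) sample of independent U(0,1) inputs) is a user-supplied assumption.

```python
import numpy as np

def first_order_sobol(model, d, n=10_000, seed=None):
    """Pick-freeze Monte Carlo estimate of first-order Sobol' indices S_1..S_d
    for a scalar model y = f(x) with d independent U(0,1) inputs.
    'model' must accept an (n, d) array and return an (n,) array."""
    rng = np.random.default_rng(seed)
    A, B = rng.random((n, d)), rng.random((n, d))
    yA, yB = model(A), model(B)
    var_y = np.concatenate([yA, yB]).var()
    indices = np.empty(d)
    for i in range(d):
        ABi = A.copy()
        ABi[:, i] = B[:, i]          # keep x_i from B, freeze the other inputs from A
        indices[i] = np.mean(yB * (model(ABi) - yA)) / var_y
    return indices
```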
Title: The relation between Pearson's correlation coefficient r and Salton's cosine measure
Abstract: The relation between Pearson's correlation coefficient and Salton's cosine measure is revealed based on the different possible values of the division of the L1-norm and the L2-norm of a vector. These different values yield a sheaf of increasingly straight lines, which together form a cloud of points; this cloud is the relation under investigation. The theoretical results are tested against the author co-citation relations among 24 informetricians for whom two matrices can be constructed, based on co-citations: the asymmetric occurrence matrix and the symmetric co-citation matrix. Both examples completely confirm the theoretical results. The results enable us to specify an algorithm which provides a threshold value for the cosine above which none of the corresponding Pearson correlations would be negative. Using this threshold value can be expected to optimize the visualization of the vector space.
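A small sketch (illustration only) showing the two quantities side by side: Pearson's r is the cosine of the mean-centred vectors, which is the geometric link underlying the relation studied here.

```python
import numpy as np

def cosine_and_pearson(x, y):
    """Salton's cosine measure of two vectors, and Pearson's correlation,
    computed as the cosine of the mean-centred vectors."""
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    cosine = (x @ y) / (np.linalg.norm(x) * np.linalg.norm(y))
    xc, yc = x - x.mean(), y - y.mean()
    pearson = (xc @ yc) / (np.linalg.norm(xc) * np.linalg.norm(yc))
    return cosine, pearson

# usage: cos_xy, r_xy = cosine_and_pearson(citation_profile_a, citation_profile_b)
```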
Title: Machine Learning: When and Where the Horses Went Astray?
Abstract: Machine Learning is usually defined as a subfield of AI, which is busy with information extraction from raw data sets. Despite its common acceptance and widespread recognition, this definition is wrong and groundless. Meaningful information does not belong to the data that bear it. It belongs to the observers of the data and it is a shared agreement and a convention among them. Therefore, this private information cannot be extracted from the data by any means. Consequently, all further attempts of Machine Learning apologists to justify their funny business are inappropriate.
Title: Belief Propagation and Loop Calculus for the Permanent of a Non-Negative Matrix
Abstract: We consider computation of the permanent of a positive $(N\times N)$ matrix, $P=(P_i^j\,|\,i,j=1,\cdots,N)$, or, equivalently, the problem of weighted counting of the perfect matchings over the complete bipartite graph $K_{N,N}$. The problem is known to be of likely exponential complexity. Stated as the partition function $Z$ of a graphical model, the problem allows an exact Loop Calculus representation [Chertkov, Chernyak '06] in terms of an interior minimum of the Bethe Free Energy functional over the non-integer doubly stochastic matrix of marginal beliefs, $\beta=(\beta_i^j\,|\,i,j=1,\cdots,N)$, which also corresponds to a fixed point of the iterative message-passing algorithm of the Belief Propagation (BP) type. Our main result is an explicit expression for the exact partition function (permanent) in terms of the matrix of BP marginals, $\beta$, namely $Z=\mathrm{Perm}(P)=Z_{BP}\,\mathrm{Perm}\bigl(\beta_i^j(1-\beta_i^j)\bigr)/\prod_{i,j}(1-\beta_i^j)$, where $Z_{BP}$ is the BP expression for the permanent stated explicitly in terms of $\beta$. We give two derivations of the formula, a direct one based on the Bethe Free Energy and an alternative one combining the Ihara graph-$\zeta$ function and the Loop Calculus approaches. Assuming that the matrix $\beta$ of the Belief Propagation marginals is calculated, we provide two lower bounds and one upper bound to estimate the multiplicative term. The two complementary lower bounds are based on the Gurvits-van der Waerden theorem and on a relation between the modified permanent and determinant, respectively.
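To make the main formula concrete, the sketch below evaluates only the multiplicative correction term Perm(beta_ij(1 - beta_ij)) / prod_ij(1 - beta_ij) appearing in the exact expression, for a given matrix of BP marginals beta (assumed to come from a converged BP run, which is not shown), using a brute-force permanent that is feasible only for small N.

```python
import numpy as np
from itertools import permutations

def brute_force_permanent(M):
    """Permanent of a small square matrix by explicit enumeration (O(N!))."""
    n = M.shape[0]
    return sum(np.prod([M[i, p[i]] for i in range(n)]) for p in permutations(range(n)))

def bp_correction_factor(beta):
    """Correction term Perm(beta_ij * (1 - beta_ij)) / prod_ij (1 - beta_ij)
    from the exact formula Perm(P) = Z_BP * correction, where beta is the
    doubly stochastic matrix of BP marginal beliefs."""
    beta = np.asarray(beta, dtype=float)
    return brute_force_permanent(beta * (1.0 - beta)) / np.prod(1.0 - beta)
```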
Title: Co-word Analysis using the Chinese Character Set
Abstract: Until recently, Chinese texts could not be studied using co-word analysis because the words are not separated by spaces in Chinese (and Japanese). A word can be composed of one or more characters. The online availability of programs that separate Chinese texts makes it possible to analyze them using semantic maps. Chinese characters contain not only information, but also meaning. This may enhance the readability of semantic maps. In this study, we analyze 58 words which occur ten or more times in the 1652 journal titles of the China Scientific and Technical Papers and Citations Database. The word occurrence matrix is visualized and factor-analyzed.
Title: Variable Second-Order Inclusion Probabilities as a Tool to Predict the Sampling Variance
Abstract: A generalization of Gy's theory for the variance of the fundamental sampling error is reviewed. Practical situations where the generalized model potentially leads to more accurate variance estimates are identified as: clustering of particles, differences in the densities or sizes of the particles, or repulsive inter-particle forces. Two general approaches for estimating an input parameter for the generalized model are discussed. The first approach consists of modelling based on physical properties of particles such as size, density and electrostatic forces between particles. The second approach uses image analysis of actual samples. Further research into both methods is proposed and a suggestion is made to use line-intercept sampling combined with Markov Chain modelling in the second approach. It is concluded that although it is, at the moment, too early for routine application of the generalized theory, the generalization has the potential to provide more accurate variance estimates than are possible under Gy's theory. Therefore, further research into the development and expansion of the generalized theory is worthwhile.
Title: A Discourse-based Approach in Text-based Machine Translation
Abstract: This paper presents an approach, grounded in theoretical research, to ellipsis resolution in machine translation. A discourse formula is applied in order to resolve ellipses. The validity of the discourse formula is analyzed by applying it to real-world text, i.e., newspaper fragments. The source text is converted into mono-sentential discourses, where complex discourses require further dissection, either directly into primitive discourses or first into compound discourses and later into primitive ones. The procedure of dissection needs further improvement, i.e., discovering as many primitive discourse forms as possible. An attempt has been made to investigate new primitive discourses or patterns from the given text.
Title: Resolution of Unidentified Words in Machine Translation
Abstract: This paper presents a mechanism for resolving unidentified lexical units in Text-based Machine Translation (TBMT). A Machine Translation (MT) system is unlikely to have a complete lexicon, hence there is a pressing need for a mechanism to handle unidentified words. These unknown words can be abbreviations, names, acronyms and newly introduced terms. We propose an algorithm for the resolution of unidentified words. The algorithm takes the discourse unit (primitive discourse) as the unit of analysis and provides real-time updates to the lexicon. We have manually applied the algorithm to newspaper fragments. Along with anaphora and cataphora resolution, many unknown words, especially names and abbreviations, were added to the lexicon.
Title: Manipulating Tournaments in Cup and Round Robin Competitions
Abstract: In sports competitions, teams can manipulate the result by, for instance, throwing games. We show that we can decide how to manipulate round robin and cup competitions, two of the most popular types of sporting competition, in polynomial time. In addition, we show that the minimal number of games that need to be thrown to manipulate the result can also be determined in polynomial time. Finally, we show that there are several different variations of standard cup competitions where manipulation remains polynomial.
Title: Industrial-Strength Formally Certified SAT Solving
Abstract: Boolean Satisfiability (SAT) solvers are now routinely used in the verification of large industrial problems. However, their application in safety-critical domains such as the railways, avionics, and automotive industries requires some form of assurance for the results, as the solvers can (and sometimes do) have bugs. Unfortunately, the complexity of modern, highly optimized SAT solvers renders impractical the development of direct formal proofs of their correctness. This paper presents an alternative approach where an untrusted, industrial-strength SAT solver is plugged into a trusted, formally certified SAT proof checker to provide industrial-strength certified SAT solving. The key novelties and characteristics of our approach are (i) that the checker is automatically extracted from the formal development, (ii) that the combined system can be used as a standalone executable program independent of any supporting theorem prover, and (iii) that the checker certifies any SAT solver respecting the agreed format for satisfiability and unsatisfiability claims. The core of the system is a certified checker for unsatisfiability claims that is formally designed and verified in Coq. We present its formal design and outline the correctness proofs. The actual standalone checker is automatically extracted from the Coq development. An evaluation of the certified checker on a representative set of industrial benchmarks from the SAT Race Competition shows that, although it is slower than uncertified SAT checkers, it is significantly faster than certified checkers implemented on top of an interactive theorem prover.
Title: Multi-Objective Optimisation Method for Posture Prediction and Analysis with Consideration of Fatigue Effect and its Application Case
Abstract: Automation techniques have been widely used in the manufacturing industry, but manual handling operations are still required in assembly and maintenance work. Inappropriate posture and physical fatigue in such physical jobs can result in musculoskeletal disorders (MSDs). In ergonomics and occupational biomechanics, virtual human modelling techniques have been employed to design and optimize manual operations at the design stage so as to avoid or decrease potential MSD risks. In these methods, physical fatigue is considered only by minimizing muscle or joint stress, and the effect of fatigue on posture over time is not sufficiently taken into account. In this study, building on existing methods and multi-objective optimisation (MOO), a new posture prediction and analysis method is proposed for predicting the optimal posture and evaluating physical fatigue in manual handling operations. The posture prediction and analysis problem is described mathematically, and a specific application case, the analysis of a drilling assembly operation at the European Aeronautic Defence & Space Company (EADS), is demonstrated.
Title: Simulation-based model selection for dynamical systems in systems and population biology
Abstract: Computer simulations have become an important tool across the biomedical sciences and beyond. For many important problems several different models or hypotheses exist and choosing which one best describes reality or observed data is not straightforward. We therefore require suitable statistical tools that allow us to choose rationally between different mechanistic models of e.g. signal transduction or gene regulation networks. This is particularly challenging in systems biology where only a small number of molecular species can be assayed at any given time and all measurements are subject to measurement uncertainty. Here we develop such a model selection framework based on approximate Bayesian computation and employing sequential Monte Carlo sampling. We show that our approach can be applied across a wide range of biological scenarios, and we illustrate its use on real data describing influenza dynamics and the JAK-STAT signalling pathway. Bayesian model selection strikes a balance between the complexity of the simulation models and their ability to describe observed data. The present approach enables us to employ the whole formal apparatus to any system that can be (efficiently) simulated, even when exact likelihoods are computationally intractable.
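As a stripped-down relative of the approach described here, the following sketch performs ABC model selection by plain rejection sampling rather than sequential Monte Carlo; `simulators[m](theta)` (returning summary statistics) and `priors[m]()` (drawing a parameter vector) are user-supplied assumptions.

```python
import numpy as np

def abc_model_probabilities(observed_stats, simulators, priors, n_draws=100_000, eps=0.1, seed=None):
    """ABC rejection estimate of posterior model probabilities under a uniform
    prior over models: accept a draw when the simulated summary statistics
    fall within distance eps of the observed ones."""
    rng = np.random.default_rng(seed)
    obs = np.asarray(observed_stats, dtype=float)
    counts = np.zeros(len(simulators))
    for _ in range(n_draws):
        m = rng.integers(len(simulators))        # model index from the uniform prior
        theta = priors[m]()                      # parameter draw from that model's prior
        stats = np.asarray(simulators[m](theta), dtype=float)
        if np.linalg.norm(stats - obs) <= eps:
            counts[m] += 1
    total = counts.sum()
    return counts / total if total > 0 else counts  # all zeros means eps was too tight
```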
Title: A Dynamic Vulnerability Map to Assess the Risk of Road Network Traffic Utilization
Abstract: The Le Havre agglomeration (CODAH) includes 16 high-threshold Seveso-classified establishments. In the literature, vulnerability maps are constructed to help decision makers assess risk. Such approaches remain static and do not take population displacement into account when estimating vulnerability. We propose a decision-making tool based on a dynamic vulnerability map to evaluate the difficulty of evacuation in the different sectors of CODAH. We use a Geographic Information System (GIS) to visualize the map, which evolves with the road traffic state through a community detection algorithm for large graphs.
Title: Different goals in multiscale simulations and how to reach them
Abstract: In this paper we sum up our work on multiscale programs, mainly simulations. We first describe what multiscaling is about and how it helps, for example, to perceive a signal against background noise in a flow of data, either for direct perception by a user or for further use by another program. We then give three examples of multiscale techniques we have used in the past: maintaining a summary, using an environmental marker that introduces a history into the data, and finally using knowledge of the behavior of the different scales to handle them all at the same time.
Title: Benchmarking Historical Corporate Performance
Abstract: This paper uses Bayesian tree models for statistical benchmarking in data sets with awkward marginals and complicated dependence structures. The method is applied to a very large database on corporate performance over the last four decades. The results of this study provide a formal basis for making cross-peer-group comparisons among companies in very different industries and operating environments. This is done by using models for Bayesian multiple hypothesis testing to determine which firms, if any, have systematically outperformed their peer groups over time. We conclude that systematic outperformance, while it seems to exist, is quite rare worldwide.
Title: Standards for Language Resources
Abstract: The goal of this paper is two-fold: to present an abstract data model for linguistic annotations and its implementation using XML, RDF and related standards; and to outline the work of a newly formed committee of the International Standards Organization (ISO), ISO/TC 37/SC 4 Language Resource Management, which will use this work as its starting point.
Title: On Bayesian Curve Fitting Via Auxiliary Variables
Abstract: In this article we revisit the auxiliary variable method introduced in Smith and Kohn (1996) for the fitting of P-th order spline regression models with an unknown number of knot points. We introduce modifications which allow the location of knot points to be random, and we further consider an extension of the method to handle models with non-Gaussian errors. We provide a new algorithm for the MCMC sampling of such models. Simulated data examples are used to compare the performance of our method with existing ones. Finally, we make a connection with some change-point problems, and show how they can be re-parameterised to the variable selection setting.
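For readers unfamiliar with the underlying regression model, the sketch below builds the truncated power basis of a P-th order spline with a fixed set of knots and fits it by least squares; the auxiliary-variable MCMC over the number and location of knots, which is the paper's actual contribution, is not shown.

```python
import numpy as np

def spline_design_matrix(x, knots, p=3):
    """Design matrix of a p-th order spline in the truncated power basis:
    columns 1, x, ..., x^p and (x - k)_+^p for each knot k."""
    x = np.asarray(x, dtype=float)
    poly = np.vander(x, p + 1, increasing=True)
    trunc = np.maximum(x[:, None] - np.asarray(knots, dtype=float)[None, :], 0.0) ** p
    return np.hstack([poly, trunc])

def fit_spline(x, y, knots, p=3):
    """Least-squares fit of the spline regression for a fixed set of knots."""
    X = spline_design_matrix(x, knots, p)
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return X @ coef  # fitted values at the observed x
```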
Title: Regression on a Graph
Abstract: The `Signal plus Noise' model for nonparametric regression can be extended to the case of observations taken at the vertices of a graph. This model includes many familiar regression problems. This article discusses the use of the edges of a graph to measure roughness in penalized regression. Distance between estimate and observation is measured at every vertex in the $L_2$ norm, and roughness is penalized on every edge in the $L_1$ norm. Thus the ideas of total-variation penalization can be extended to a graph. The resulting minimization problem presents special computational challenges, so we describe a new, fast algorithm and demonstrate its use with examples. Further examples include a graphical approach that gives an improved estimate of the baseline in spectroscopic analysis, and a simulation applicable to discrete spatial variation. In our example, penalized regression outperforms kernel smoothing in terms of identifying local extreme values. In all examples we use fully automatic procedures for setting the smoothing parameters.
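A minimal sketch of the optimization problem described in the abstract (squared-error fit at every vertex, L1 roughness penalty on every edge), solved here with a generic convex solver instead of the paper's fast special-purpose algorithm; the edge list and penalty weight in the usage line are assumptions of the example.

```python
import numpy as np
import cvxpy as cp

def graph_tv_regression(y, edges, lam=1.0):
    """Penalized regression on a graph: minimize ||f - y||_2^2 + lam * sum_edges |f_i - f_j|."""
    y = np.asarray(y, dtype=float)
    D = np.zeros((len(edges), len(y)))
    for k, (i, j) in enumerate(edges):        # signed incidence matrix, one row per edge
        D[k, i], D[k, j] = 1.0, -1.0
    f = cp.Variable(len(y))
    objective = cp.sum_squares(f - y) + lam * cp.norm1(D @ f)
    cp.Problem(cp.Minimize(objective)).solve()
    return f.value

# usage on a path graph (ordinary total-variation denoising):
# f_hat = graph_tv_regression(y, edges=[(i, i + 1) for i in range(len(y) - 1)], lam=2.0)
```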
Title: Active Learning for Mention Detection: A Comparison of Sentence Selection Strategies
Abstract: We propose and compare various sentence selection strategies for active learning for the task of detecting mentions of entities. The best strategy employs the sum of confidences of two statistical classifiers trained on different views of the data. Our experimental results show that, compared to the random selection strategy, this strategy reduces the amount of required labeled training data by over 50% while achieving the same performance. The effect is even more significant when only named mentions are considered: the system achieves the same performance by using only 42% of the training data required by the random selection strategy.
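A minimal sketch of the winning selection rule, under the assumption that each of the two view-specific classifiers reports a per-sentence confidence score and that the least-confident sentences are the ones queried (the usual uncertainty-sampling convention); the classifiers themselves are not specified in the abstract and are left abstract here.

```python
import numpy as np

def select_for_annotation(conf_view1, conf_view2, budget):
    """Confidence-sum strategy: send for labeling the sentences on which the two
    classifiers are jointly least confident (smallest summed confidence)."""
    total = np.asarray(conf_view1, dtype=float) + np.asarray(conf_view2, dtype=float)
    return np.argsort(total)[:budget]  # indices of sentences to annotate next
```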
Title: Statistical applications of the multivariate skew-normal distribution
Abstract: Azzalini & Dalla Valle (1996) have recently discussed the multivariate skew-normal distribution which extends the class of normal distributions by the addition of a shape parameter. The first part of the present paper examines further probabilistic properties of the distribution, with special emphasis on aspects of statistical relevance. Inferential and other statistical issues are discussed in the following part, with applications to some multivariate statistics problems, illustrated by numerical examples. Finally, a further extension is described which introduces a skewing factor of an elliptical density.
Title: A New Look at the Classical Entropy of Written English
Abstract: A simple method for finding the entropy and redundancy of a reasonably long sample of English text by direct computer processing, and from first principles according to Shannon theory, is presented. As an example, results on the entropy of the English language have been obtained from a total of 20.3 million characters of written English, considering symbols from one to five hundred characters in length. Besides a more realistic value of the entropy of English, a new perspective on some classic entropy-related concepts is presented. The method can also be extended to other Latin languages. Some implications for practical applications, such as plagiarism-detection software and the minimum number of words that should be used in social Internet network messaging, are discussed.
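The core computation is Shannon's block entropy estimated from empirical n-gram frequencies; a minimal sketch follows, with the caveat that reliable estimates for long blocks need very large samples, which is why the paper uses a 20.3-million-character corpus.

```python
from collections import Counter
import math

def block_entropy_per_char(text, n):
    """Shannon block entropy per character, H_n / n, from the empirical
    frequencies of the character n-grams of a text sample (in bits)."""
    counts = Counter(text[i:i + n] for i in range(len(text) - n + 1))
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values()) / n

# usage: estimates for increasing n approach the per-symbol entropy of the source
# h5 = block_entropy_per_char(corpus_text, n=5)
```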
Title: Distributions generated by perturbation of symmetry with emphasis on a multivariate skew $t$ distribution
Abstract: A fairly general procedure is studied to perturb a multivariate density satisfying a weak form of multivariate symmetry, and to generate a whole set of non-symmetric densities. The approach is general enough to encompass a number of recent proposals in the literature, variously related to the skew normal distribution. The special case of skew elliptical densities is examined in detail, establishing connections with existing similar work. The final part of the paper specializes further to a form of multivariate skew $t$ density. Likelihood inference for this distribution is examined and illustrated with numerical examples.
Title: High dimensional sparse covariance estimation via directed acyclic graphs
Abstract: We present a graph-based technique for estimating sparse covariance matrices and their inverses from high-dimensional data. The method is based on learning a directed acyclic graph (DAG) and estimating parameters of a multivariate Gaussian distribution based on a DAG. For inferring the underlying DAG we use the PC-algorithm and for estimating the DAG-based covariance matrix and its inverse, we use a Cholesky decomposition approach which provides a positive (semi-)definite sparse estimate. We present a consistency result in the high-dimensional framework and we compare our method with the Glasso for simulated and real data.
Title: Analytical Determination of Fractal Structure in Stochastic Time Series
Abstract: Current methods for determining whether a time series exhibits fractal structure (FS) rely on subjective assessments of estimators of the Hurst exponent (H). Here, I introduce the Bayesian Assessment of Scaling, an analytical framework for drawing objective and accurate inferences on the FS of time series. The technique exploits the scaling property of the diffusion associated with a time series. The resulting criterion is simple to compute and represents an accurate characterization of the evidence supporting different hypotheses on the scaling regime of a time series. Additionally, a closed-form maximum likelihood estimator of H is derived from the criterion, and this estimator outperforms the best available estimators.
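For orientation, the scaling property mentioned in the abstract can be illustrated with a rough point estimate of H from the diffusion (cumulative sum) of the series, whose increments over a lag tau have variance proportional to tau^(2H); this is a plain regression-on-log-scales illustration (requiring a series much longer than the largest lag), not the paper's Bayesian criterion or closed-form MLE.

```python
import numpy as np

def hurst_from_diffusion_scaling(x, scales=(2, 4, 8, 16, 32, 64)):
    """Rough Hurst exponent estimate from Var[B(t + tau) - B(t)] ~ tau^(2H),
    where B is the cumulative sum (diffusion) of the mean-centred series."""
    x = np.asarray(x, dtype=float)
    b = np.cumsum(x - x.mean())                           # associated diffusion
    variances = [np.var(b[tau:] - b[:-tau]) for tau in scales]
    slope, _ = np.polyfit(np.log(scales), np.log(variances), 1)
    return slope / 2.0
```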
Title: How Creative Should Creators Be To Optimize the Evolution of Ideas? A Computational Model
Abstract: There are both benefits and drawbacks to creativity. In a social group it is not necessary for all members to be creative to benefit from creativity; some merely imitate or enjoy the fruits of others' creative efforts. What proportion should be creative? This paper contains a very preliminary investigation of this question, carried out using a computer model of cultural evolution referred to as EVOC (for EVOlution of Culture). EVOC is composed of neural-network-based agents that evolve fitter ideas for actions by (1) inventing new ideas through modification of existing ones, and (2) imitating neighbors' ideas. The ideal proportion with respect to fitness of ideas occurs when thirty to forty percent of the individuals are creative. When creators invent on 50% of iterations or less, the mean fitness of actions in the society is a positive function of the ratio of creators to imitators; otherwise the mean fitness of actions starts to drop when the ratio of creators to imitators exceeds approximately 30%. For all levels of creativity, the diversity of ideas in a population is positively correlated with the ratio of creative agents.
Title: Emotion: Appraisal-coping model for the "Cascades" problem
Abstract: Modelling emotion has become a challenge nowadays. Therefore, several models have been produced in order to express human emotional activity. However, only a few of them are currently able to express the close relationship existing between emotion and cognition. An appraisal-coping model is presented here, with the aim to simulate the emotional impact caused by the evaluation of a particular situation (appraisal), along with the consequent cognitive reaction intended to face the situation (coping). This model is applied to the "Cascades" problem, a small arithmetical exercise designed for ten-year-old pupils. The goal is to create a model corresponding to a child's behaviour when solving the problem using his own strategies.
Title: Emotion: An appraisal-coping model for the "Cascades" problem
Abstract: Modeling emotion has become a challenge nowadays. Therefore, several models have been produced in order to express human emotional activity. However, only a few of them are currently able to express the close relationship existing between emotion and cognition. An appraisal-coping model is presented here, with the aim to simulate the emotional impact caused by the evaluation of a particular situation (appraisal), along with the consequent cognitive reaction intended to face the situation (coping). This model is applied to the "Cascades" problem, a small arithmetical exercise designed for ten-year-old pupils. The goal is to create a model corresponding to a child's behavior when solving the problem using his own strategies.
Title: Local statistical modeling by cluster-weighted
Abstract: We investigate statistical properties of Cluster-Weighted Modeling, a framework for supervised learning originally developed in order to recreate a digital violin with traditional inputs and realistic sound. The analysis is carried out in comparison with Finite Mixtures of Regression models. Based on some geometrical arguments, we highlight that Cluster-Weighted Modeling provides a quite general framework for local statistical modeling. The theoretical results are illustrated by means of some numerical simulations.
Title: Proceedings Fifth Workshop on Developments in Computational Models--Computational Models From Nature
Abstract: The special theme of DCM 2009, co-located with ICALP 2009, concerned Computational Models From Nature, with a particular emphasis on computational models derived from physics and biology. The intention was to bring together different approaches - in a community with a strong foundational background as proffered by the ICALP attendees - to create inspirational cross-boundary exchanges, and to lead to innovative further research. Specifically DCM 2009 sought contributions in quantum computation and information, probabilistic models, chemical, biological and bio-inspired ones, including spatial models, growth models and models of self-assembly. Contributions putting to the test logical or algorithmic aspects of computing (e.g., continuous computing with dynamical systems, or solid state computing models) were also very much welcomed.
Title: Neural Networks for Dynamic Shortest Path Routing Problems - A Survey
Abstract: This paper gives an overview of the dynamic shortest path routing problem and of the various neural networks used to solve it. Different shortest path optimization problems can be solved using various neural network algorithms. Routing in packet-switched multi-hop networks can be described as a classical combinatorial optimization problem, i.e., a shortest path routing problem on graphs. The survey shows that neural networks are the best candidates for the optimization of dynamic shortest path routing problems owing to their speed of computation compared to other soft-computing and metaheuristic algorithms.
Title: A Hierarchical Bayesian Model for Frame Representation
Abstract: In many signal processing problems, it may be fruitful to represent the signal under study in a frame. If a probabilistic approach is adopted, it then becomes necessary to estimate the hyper-parameters characterizing the probability distribution of the frame coefficients. This problem is difficult since in general the frame synthesis operator is not bijective; consequently, the frame coefficients are not directly observable. This paper introduces a hierarchical Bayesian model for frame representation. The posterior distribution of the frame coefficients and model hyper-parameters is derived. Hybrid Markov Chain Monte Carlo algorithms are subsequently proposed to sample from this posterior distribution. The generated samples are then exploited to estimate the hyper-parameters and the frame coefficients of the target signal. Validation experiments show that the proposed algorithms provide an accurate estimation of the frame coefficients and hyper-parameters. Application to practical problems of image denoising shows the impact of the resulting Bayesian estimation on the recovered signal quality.