Abstract: Written language is a complex communication signal capable of conveying information encoded in the form of ordered sequences of words. Beyond the local order ruled by grammar, semantic and thematic structures affect long-range patterns in word usage. Here, we show that a direct application of information theory quantifies the relationship between the statistical distribution of words and the semantic content of the text. We show that there is a characteristic scale, roughly around a few thousand words, which establishes the typical size of the most informative segments in written language. Moreover, we find that the words whose contributions to the overall information are larger are the ones more closely associated with the main subjects and topics of the text. This scenario can be explained by a model of word usage that assumes that words are distributed along the text in domains of a characteristic size where their frequency is higher than elsewhere. Our conclusions are based on the analysis of a large database of written language, diverse in subjects and styles, and thus are likely to be applicable to general language sequences encoding complex information.
Title: Fast search for Dirichlet process mixture models
Abstract: Dirichlet process (DP) mixture models provide a flexible Bayesian framework for density estimation. Unfortunately, their flexibility comes at a cost: inference in DP mixture models is computationally expensive, even when conjugate distributions are used. In the common case when one seeks only a maximum a posteriori assignment of data points to clusters, we show that search algorithms provide a practical alternative to expensive MCMC and variational techniques. When a true posterior sample is desired, the solution found by search can serve as a good initializer for MCMC. Experimental results show that using these techniques it is possible to apply DP mixture models to very large data sets.
Title: Bayesian Query-Focused Summarization
Abstract: We present BayeSum (for ``Bayesian summarization''), a model for sentence extraction in query-focused summarization. BayeSum leverages the common case in which multiple documents are relevant to a single query. Using these documents as reinforcement for query terms, BayeSum is not afflicted by the paucity of information in short queries. We show that approximate inference in BayeSum is possible on large data sets and results in a state-of-the-art summarization system. Furthermore, we show how BayeSum can be understood as a justified query expansion technique in the language modeling for IR framework.
Title: Frustratingly Easy Domain Adaptation
Abstract: We describe an approach to domain adaptation that is appropriate exactly in the case when one has enough ``target'' data to do slightly better than just using only ``source'' data. Our approach is incredibly simple, easy to implement as a preprocessing step (10 lines of Perl!) and outperforms state-of-the-art approaches on a range of datasets. Moreover, it is trivially extended to a multi-domain adaptation problem, where one has data from a variety of different domains.
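The "preprocessing step" this abstract refers to is a feature-augmentation trick that is easy to sketch. The following is a minimal illustration under my own assumptions (list-based feature vectors, hypothetical function name), not the authors' code: every feature is tripled into a shared copy, a source-only copy, and a target-only copy, after which any off-the-shelf classifier is trained on the concatenated data.

```python
# Sketch of feature augmentation for domain adaptation: each source
# example becomes (shared, source, 0) and each target example becomes
# (shared, 0, target). Names and representation are illustrative.
def augment(x, domain):
    zeros = [0.0] * len(x)
    if domain == "source":
        return x + x + zeros          # shared and source slots filled
    if domain == "target":
        return x + zeros + x          # shared and target slots filled
    raise ValueError("domain must be 'source' or 'target'")

print(augment([1.0, 2.0], "source"))  # [1.0, 2.0, 1.0, 2.0, 0.0, 0.0]
print(augment([3.0, 4.0], "target"))  # [3.0, 4.0, 0.0, 0.0, 3.0, 4.0]
```

A classifier trained on the augmented vectors can then weight the shared copy for behavior common to both domains and the domain-specific copies for behavior unique to each.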
Title: Improvement of random LHD for high dimensions
Abstract: Designs of experiments for the multivariate case are reviewed. A fast algorithm for the construction of good Latin hypercube designs is developed.
Title: Information geometry for testing pseudorandom number generators
Abstract: The information geometry of the 2-manifold of gamma probability density functions provides a framework in which pseudorandom number generators may be evaluated using a neighbourhood of the curve of exponential density functions. The process is illustrated using the pseudorandom number generator in Mathematica. This methodology may be useful to add to the current family of test procedures in real applications to finite sampling data.
Title: An Evolved Neural Controller for Bipedal Walking with Dynamic Balance
Abstract: We successfully evolved a neural network controller that produces dynamic walking in a simulated bipedal robot with compliant actuators, a difficult control problem. The evolutionary evaluation uses a detailed software simulation of a physical robot. We describe: 1) a novel theoretical method to encourage populations to evolve "around" local optima, which employs multiple demes and fitness functions of progressively increasing difficulty, and 2) the novel genetic representation of the neural controller.
Title: Learning Equilibria in Games by Stochastic Distributed Algorithms
Abstract: We consider a class of fully stochastic and fully distributed algorithms, which we prove learn equilibria in games. Indeed, we consider a family of stochastic distributed dynamics that we prove to converge weakly (in the sense of weak convergence for probabilistic processes) towards their mean-field limit, i.e. an ordinary differential equation (ODE) in the general case. We then focus on a class of stochastic dynamics where this ODE turns out to be related to multipopulation replicator dynamics. Using facts known about convergence of this ODE, we discuss the convergence of the initial stochastic dynamics: for general games, there might be non-convergence, but when convergence of the ODE holds, the considered stochastic algorithms converge towards Nash equilibria. For games admitting Lyapunov functions, which we call Lyapunov games, the stochastic dynamics converge. We prove that any ordinal potential game, and hence any potential game, is a Lyapunov game, with a multiaffine Lyapunov function. For Lyapunov games with a multiaffine Lyapunov function, we prove that this Lyapunov function is a super-martingale over the stochastic dynamics. This leads to a way to provide bounds on their time of convergence by martingale arguments. This applies in particular to many classes of games that have been considered in the literature, including several load balancing game scenarios and congestion games.
Title: Modeling self-organizing traffic lights with elementary cellular automata
Abstract: There have been several highway traffic models proposed based on cellular automata. The simplest one is elementary cellular automaton rule 184. We extend this model to city traffic with cellular automata coupled at intersections using only rules 184, 252, and 136. The simplicity of the model offers a clear understanding of the main properties of city traffic and its phase transitions. We use the proposed model to compare two methods for coordinating traffic lights: a green-wave method that tries to optimize phases according to expected flows and a self-organizing method that adapts to the current traffic conditions. The self-organizing method delivers considerable improvements over the green-wave method. For low densities, the self-organizing method promotes the formation and coordination of platoons that flow freely in four directions, i.e. with a maximum velocity and no stops. For medium densities, the method allows a constant usage of the intersections, exploiting their maximum flux capacity. For high densities, the method prevents gridlocks and promotes the formation and coordination of "free-spaces" that flow in the opposite direction of traffic.
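The elementary cellular automaton rule 184 that this model builds on is simple enough to state in a few lines. Below is a minimal sketch under my own assumptions (periodic boundary, illustrative initial road), not the paper's city-traffic extension: a car (1) moves right exactly when the next cell is empty.

```python
# Rule 184 highway model: a car advances iff the cell ahead is free.
# The lattice is a ring (periodic boundary); car count is conserved.
def rule184_step(cells):
    n = len(cells)
    nxt = [0] * n
    for i in range(n):
        if cells[i] == 1 and cells[(i + 1) % n] == 0:
            nxt[(i + 1) % n] = 1   # car moves forward
        elif cells[i] == 1:
            nxt[i] = 1             # blocked car stays put
    return nxt

road = [1, 1, 0, 0, 1, 0]
print(rule184_step(road))  # [1, 0, 1, 0, 0, 1]
```

Iterating this step exhibits the free-flow and jammed phases whose interplay at intersections (via rules 252 and 136) underlies the traffic-light comparison in the abstract.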
Title: Statistical estimation requires unbounded memory
Abstract: We investigate the existence of bounded-memory consistent estimators of various statistical functionals. This question is resolved in the negative in a rather strong sense. We propose various bounded-memory approximations, using techniques from automata theory and stochastic processes. Some questions of potential interest are raised for future work.
Title: Multiresolution Elastic Medical Image Registration in Standard Intensity Scale
Abstract: Medical image registration is a difficult problem. Not only does a registration algorithm need to capture both large- and small-scale image deformations, it also has to deal with global and local image intensity variations. In this paper we describe a new multiresolution elastic image registration method that addresses these difficulties in image registration. To capture large- and small-scale image deformations, we use both global and local affine transformation algorithms. To address global and local image intensity variations, we apply an image intensity standardization algorithm to correct image intensity variations. This transforms image intensities into a standard intensity scale, which allows highly accurate registration of medical images.
Title: An Augmented Lagrangian Approach for Sparse Principal Component Analysis
Abstract: Principal component analysis (PCA) is a widely used technique for data analysis and dimension reduction with numerous applications in science and engineering. However, the standard PCA suffers from the fact that the principal components (PCs) are usually linear combinations of all the original variables, and it is thus often difficult to interpret the PCs. To alleviate this drawback, various sparse PCA approaches have been proposed in the literature [15, 6, 17, 28, 8, 25, 18, 7, 16]. Despite success in achieving sparsity, some important properties enjoyed by the standard PCA are lost in these methods, such as the uncorrelatedness of the PCs and the orthogonality of the loading vectors. Also, the total explained variance that they attempt to maximize can be too optimistic. In this paper we propose a new formulation for sparse PCA, aiming at finding sparse and nearly uncorrelated PCs with orthogonal loading vectors while explaining as much of the total variance as possible. We also develop a novel augmented Lagrangian method for solving a class of nonsmooth constrained optimization problems, which is well suited for our formulation of sparse PCA. We show that it converges to a feasible point, and moreover, under some regularity assumptions, it converges to a stationary point. Additionally, we propose two nonmonotone gradient methods for solving the augmented Lagrangian subproblems, and establish their global and local convergence. Finally, we compare our sparse PCA approach with several existing methods on synthetic, random, and real data. The computational results demonstrate that the sparse PCs produced by our approach substantially outperform those produced by other methods in terms of total explained variance, correlation of PCs, and orthogonality of loading vectors.
Title: Shrinkage regression for multivariate inference with missing data, and an application to portfolio balancing
Abstract: Portfolio balancing requires estimates of covariance between asset returns. Returns data have histories which greatly vary in length, since assets begin public trading at different times. This can lead to a huge amount of missing data--too much for the conventional imputation-based approach. Fortunately, a well-known factorization of the MVN likelihood under the prevailing historical missingness pattern leads to a simple algorithm of OLS regressions that is much more reliable. When there are more assets than returns, however, OLS becomes unstable. Gramacy et al. (2008) showed how classical shrinkage regression may be used instead, thus extending the state of the art to much bigger asset collections, with further accuracy and interpretation advantages. In this paper, we detail a fully Bayesian hierarchical formulation that extends the framework further by allowing for heavy-tailed errors, relaxing the historical missingness assumption, and accounting for estimation risk. We illustrate how this approach compares favorably to the classical one using synthetic data and an investment exercise with real returns. An accompanying R package is on CRAN.
Title: Network-aware Adaptation with Real-Time Channel Statistics for Wireless LAN Multimedia Transmissions in the Digital Home
Abstract: This paper suggests the use of intelligent network-aware processing agents in wireless local area network drivers to generate metrics for bandwidth estimation based on real-time channel statistics to enable wireless multimedia application adaptation. Various configurations in the wireless digital home are studied and the experimental results with performance variations are presented.
Title: Sparsistent Estimation of Time-Varying Discrete Markov Random Fields
Abstract: Network models have been popular for modeling and representing complex relationships and dependencies between observed variables. When data come from a dynamic stochastic process, a single static network model cannot adequately capture transient dependencies, such as gene regulatory dependencies throughout a developmental cycle of an organism. Kolar et al. (2010b) proposed a method based on kernel-smoothing l1-penalized logistic regression for estimating time-varying networks from nodal observations collected from a time series of observational data. In this paper, we establish conditions under which the proposed method consistently recovers the structure of a time-varying network. This work complements previous empirical findings by providing sound theoretical guarantees for the proposed estimation procedure. For completeness, we include numerical simulations in the paper.
Title: Pattern Based Term Extraction Using ACABIT System
Abstract: In this paper, we propose a pattern-based term extraction approach for Japanese, applying the ACABIT system originally developed for French. The proposed approach evaluates termhood using morphological patterns of basic terms and term variants. After extracting term candidates, the ACABIT system filters out non-terms from the candidates based on log-likelihood. This approach is suitable for Japanese term extraction because most Japanese terms are compound nouns or simple phrasal patterns.
Title: Why we (usually) don't have to worry about multiple comparisons
Abstract: Applied researchers often find themselves making statistical inferences in settings that would seem to require multiple comparisons adjustments. We challenge the Type I error paradigm that underlies these corrections. Moreover we posit that the problem of multiple comparisons can disappear entirely when viewed from a hierarchical Bayesian perspective. We propose building multilevel models in the settings where multiple comparisons arise. Multilevel models perform partial pooling (shifting estimates toward each other), whereas classical procedures typically keep the centers of intervals stationary, adjusting for multiple comparisons by making the intervals wider (or, equivalently, adjusting the $p$-values corresponding to intervals of fixed width). Thus, multilevel models address the multiple comparisons problem and also yield more efficient estimates, especially in settings with low group-level variation, which is where multiple comparisons are a particular concern.
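The partial pooling described above can be sketched with a simple precision-weighted shrinkage of group means toward the grand mean. This is a minimal illustration under assumed, fixed variance components (`sigma2` within groups, `tau2` between groups); a full multilevel model would estimate these from the data rather than take them as given.

```python
# Partial pooling sketch: each group mean is pulled toward the grand
# mean, with more shrinkage when between-group variation (tau2) is
# small relative to the sampling noise (sigma2 / n).
def partial_pool(group_means, group_sizes, sigma2, tau2):
    grand = sum(m * n for m, n in zip(group_means, group_sizes)) / sum(group_sizes)
    pooled = []
    for m, n in zip(group_means, group_sizes):
        w = (n / sigma2) / (n / sigma2 + 1.0 / tau2)  # precision weight
        pooled.append(w * m + (1.0 - w) * grand)
    return pooled

# Two groups of 4 observations with raw means 0 and 10:
print(partial_pool([0.0, 10.0], [4, 4], sigma2=4.0, tau2=1.0))  # [2.5, 7.5]
```

The shifted estimates illustrate the abstract's point: interval centers move toward each other instead of intervals merely being widened, which is how the multilevel view sidesteps the multiple comparisons adjustment.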
Title: Thoughts on new statistical procedures for age-period-cohort analyses
Abstract: Age-period-cohort analysis is mathematically intractable because of fundamental nonidentifiability of linear trends. However, some understanding can be gained in the context of individual problems.
Title: On Cyclic and Nearly Cyclic Multiagent Interactions in the Plane
Abstract: We discuss certain types of cyclic and nearly cyclic interactions among N "point"-agents in the plane, leading to formations of interesting limiting geometric configurations. Cyclic pursuit and local averaging interactions have been analyzed in the context of multi-agent gathering. In this paper, we consider some nearly cyclic interactions that break symmetry leading to factor circulants rather than circulant interaction matrices.
Title: Bayesian methods to overcome the winner's curse in genetic studies
Abstract: Parameter estimates for associated genetic variants, reported in the initial discovery samples, are often grossly inflated compared to the values observed in the follow-up replication samples. This type of bias is a consequence of the sequential procedure in which the estimated effect of an associated genetic marker must first pass a stringent significance threshold. We propose a hierarchical Bayes method in which a spike-and-slab prior is used to account for the possibility that the significant test result may be due to chance. We examine the robustness of the method using different priors corresponding to different degrees of confidence in the testing results and propose a Bayesian model averaging procedure to combine estimates produced by different models. The Bayesian estimators yield smaller variance compared to the conditional likelihood estimator and outperform the latter in studies with low power. We investigate the performance of the method with simulations and applications to four real data examples.
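The shrinkage effect of a spike-and-slab prior can be sketched in closed form for a single normally distributed estimate. This is an illustration under my own assumptions (a point-mass spike at zero, a normal N(0, tau2) slab, and made-up numbers), not the paper's hierarchical model: the posterior mean downweights the raw estimate by both the posterior probability of the slab and the usual normal shrinkage factor.

```python
import math

def normal_pdf(x, var):
    return math.exp(-x * x / (2 * var)) / math.sqrt(2 * math.pi * var)

def spike_slab_shrink(beta_hat, se, p_slab, tau2):
    """Posterior mean of an effect observed as beta_hat ~ N(beta, se^2),
    under a spike at zero (prob 1 - p_slab) and a N(0, tau2) slab."""
    se2 = se * se
    m_slab = normal_pdf(beta_hat, se2 + tau2)   # marginal under the slab
    m_spike = normal_pdf(beta_hat, se2)         # marginal under the spike
    post_slab = p_slab * m_slab / (p_slab * m_slab + (1 - p_slab) * m_spike)
    return post_slab * (tau2 / (tau2 + se2)) * beta_hat

# A "significant" raw estimate of 0.5 is pulled toward zero:
print(spike_slab_shrink(0.5, se=0.1, p_slab=0.5, tau2=0.04))
```

Weak or noisy estimates get shrunk heavily toward zero, which is exactly the correction for winner's-curse inflation the abstract describes.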
Title: Modelling Concurrent Behaviors in the Process Specification Language
Abstract: In this paper, we propose a first-order ontology for generalized stratified order structures. We then classify the models of the theory using model-theoretic techniques. A mapping from this ontology to the core theory of the Process Specification Language is also discussed.
Title: Distribution Fitting 1. Parameter Estimation under the Assumption of Agreement between Observation and Model
Abstract: The methods for parameter estimation under assumption of agreement between observation and model are reviewed. The distribution parameters are obtained for one set of experimental data by using different estimation methods under assumption of Gauss-Laplace theoretical distribution. The results are presented and discussed.
Title: Distribution Fitting 2. Pearson-Fisher, Kolmogorov-Smirnov, Anderson-Darling, Wilks-Shapiro, Cramer-von Mises and Jarque-Bera statistics
Abstract: The methods measuring the departure between observation and model are reviewed. The following statistics were applied to two experimental data sets: Chi-Squared, Kolmogorov-Smirnov, Anderson-Darling, Wilks-Shapiro, and Jarque-Bera. Both investigated sets proved not to be normally distributed. The Grubbs test identified one outlier, and after its removal the normality of the set of 205 chemically active compounds was accepted. The second data set proved not to have any outliers. The Kolmogorov-Smirnov statistic is less affected by the existence of outliers (positive variation, expressed as a percentage, smaller than 2). Outliers lead the Kolmogorov-Smirnov statistic to type II errors and the Anderson-Darling statistic to type I errors.
Title: The Single Machine Total Weighted Tardiness Problem - Is it (for Metaheuristics) a Solved Problem?
Abstract: The article presents a study of rather simple local search heuristics for the single machine total weighted tardiness problem (SMTWTP), namely hillclimbing and Variable Neighborhood Search. In particular, we revisit these approaches for the SMTWTP as there appears to be a lack of appropriate/challenging benchmark instances in this case. The obtained results are impressive indeed. Only few instances remain unsolved, and even those are approximated within 1% of the optimal/best known solutions. Our experiments support the claim that metaheuristics for the SMTWTP are very likely to lead to good results, and that, before refining search strategies, more work must be done with regard to the proposition of benchmark data. Some recommendations for the construction of such data sets are derived from our investigations.
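The hillclimbing baseline this study revisits is straightforward to sketch for the SMTWTP. The following is a minimal illustration with a made-up three-job instance, not the authors' implementation: adjacent swaps are accepted whenever they reduce total weighted tardiness, until no swap improves.

```python
# Total weighted tardiness of a job sequence; jobs are tuples of
# (processing_time, due_date, weight). Instance data is illustrative.
def twt(seq, jobs):
    t, total = 0, 0
    for j in seq:
        p, d, w = jobs[j]
        t += p
        total += w * max(0, t - d)
    return total

def hillclimb(jobs):
    """First-improvement hillclimbing over adjacent job swaps."""
    seq = list(range(len(jobs)))
    improved = True
    while improved:
        improved = False
        for i in range(len(seq) - 1):
            cand = seq[:]
            cand[i], cand[i + 1] = cand[i + 1], cand[i]
            if twt(cand, jobs) < twt(seq, jobs):
                seq, improved = cand, True
    return seq, twt(seq, jobs)

jobs = [(3, 2, 1), (1, 1, 5), (2, 6, 2)]
print(hillclimb(jobs))  # ([1, 0, 2], 2)
```

Even this crude neighborhood finds the optimum of the toy instance, which echoes the article's point that simple local search already solves most standard SMTWTP benchmarks.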
Title: Improvements for multi-objective flow shop scheduling by Pareto Iterated Local Search
Abstract: The article describes the proposition and application of a local search metaheuristic for multi-objective optimization problems. It is based on two main principles of heuristic search: intensification through variable neighborhoods, and diversification through perturbations and successive iterations in favorable regions of the search space. The concept is successfully tested on permutation flow shop scheduling problems under multiple objectives and compared to other local search approaches. While the obtained results are encouraging in terms of their quality, another positive attribute of the approach is its simplicity, as it requires the setting of only very few parameters.
Title: Graph Theory and Optimization Problems for Very Large Networks
Abstract: Graph theory provides a primary tool for analyzing and designing computer communication networks. In the past few decades, graph theory has been used to study various types of networks, including the Internet, Wide Area Networks, Local Area Networks, and networking protocols such as the Border Gateway Protocol and Open Shortest Path First. In this paper, we present some key graph theory concepts used to represent different types of networks. Then we describe how networks are modeled to investigate problems related to network protocols. Finally, we present some of the tools used to generate graphs representing practical networks.
Title: Checking Election Outcome Accuracy: Post-Election Audit Sampling
Abstract: This article:
* provides an overview of post-election audit sampling research and compares various approaches to calculating post-election audit sample sizes, focusing on risk-limiting audits,
* discusses fundamental concepts common to all risk-limiting post-election audits, presenting new margin error bounds, sampling weights and sampling probabilities that improve upon existing approaches and work for any size audit unit and for single- or multi-winner election contests,
* provides two new simple formulas for estimating post-election audit sample sizes in cases when detailed data, expertise, or tools are not available,
* summarizes four improved methods for calculating risk-limiting election audit sample sizes, showing how to apply precise margin error bounds to improve the accuracy and efficacy of existing methods, and
* discusses sampling mistakes that reduce post-election audit effectiveness.
Title: Registration of Standardized Histological Images in Feature Space
Abstract: In this paper, we propose three novel and important methods for the registration of histological images for 3D reconstruction. First, possible intensity variations and nonstandardness in images are corrected by an intensity standardization process which maps the image scale into a standard scale where similar intensities correspond to similar tissue meanings. Second, 2D histological images are mapped into a feature space where continuous variables are used as high-confidence image features for accurate registration. Third, we propose an automatic best reference slice selection algorithm that improves reconstruction quality based on both image entropy and mean square error of the registration process. We demonstrate that the choice of reference slice has a significant impact on registration error, standardization, feature space and entropy information. After 2D histological slices are registered through an affine transformation with respect to an automatically chosen reference, the 3D volume is reconstructed by co-registering 2D slices elastically.
Title: Fully Automatic 3D Reconstruction of Histological Images
Abstract: In this paper, we propose a computational framework for 3D volume reconstruction from 2D histological slices using registration algorithms in feature space. To improve the quality of reconstructed 3D volume, first, intensity variations in images are corrected by an intensity standardization process which maps image intensity scale to a standard scale where similar intensities correspond to similar tissues. Second, a subvolume approach is proposed for 3D reconstruction by dividing standardized slices into groups. Third, in order to improve the quality of the reconstruction process, an automatic best reference slice selection algorithm is developed based on an iterative assessment of image entropy and mean square error of the registration process. Finally, we demonstrate that the choice of the reference slice has a significant impact on registration quality and subsequent 3D reconstruction.
Title: Parallel AdaBoost Algorithm for Gabor Wavelet Selection in Face Recognition
Abstract: In this paper, the problem of automatic Gabor wavelet selection for face recognition is tackled by introducing an automatic algorithm based on the Parallel AdaBoost method. Incorporating mutual information into the algorithm makes the selection procedure based not only on classification accuracy but also on efficiency. Effective image features are selected by using properly chosen Gabor wavelets optimised with the Parallel AdaBoost method and mutual information to achieve high recognition rates at low computational cost. Experiments are conducted using the well-known FERET face database. In the proposed framework, memory and computation costs are reduced significantly and high classification accuracy is obtained.
Title: Inter Genre Similarity Modelling For Automatic Music Genre Classification
Abstract: Music genre classification is an essential tool for music information retrieval systems, and it has found critical applications in various media platforms. Two important problems in automatic music genre classification are feature extraction and classifier design. This paper investigates inter-genre similarity modelling (IGS) to improve the performance of automatic music genre classification. Inter-genre similarity information is extracted from the mis-classified feature population. Once the inter-genre similarity is modelled, eliminating it reduces inter-genre confusion and improves identification rates. Inter-genre similarity modelling is further improved with iterative IGS modelling (IIGS) and score modelling for IGS elimination (SMIGS). Experimental results with promising classification improvements are provided.
Title: Neural Modeling and Control of Diesel Engine with Pollution Constraints
Abstract: The paper describes a neural approach for modelling and control of a turbocharged Diesel engine. A neural model, whose structure is mainly based on some physical equations describing the engine behaviour, is built for the rotation speed and the exhaust gas opacity. The model is composed of three interconnected neural submodels, each of them constituting a nonlinear multi-input single-output error model. The structural identification and the parameter estimation from data gathered on a real engine are described. The neural direct model is then used to determine a neural controller of the engine, in a specialized training scheme minimising a multivariable criterion. Simulations show the effect of the pollution constraint weighting on trajectory tracking of the engine speed. Neural networks, which are flexible and parsimonious nonlinear black-box models with universal approximation capabilities, can accurately describe or control complex nonlinear systems, with little a priori theoretical knowledge. The presented work extends optimal neuro-control to the multivariable case and shows the flexibility of neural optimisers. Considering the preliminary results, it appears that neural networks can be used as embedded models for engine control, to satisfy increasingly restrictive pollutant emission legislation. In particular, they are able to model nonlinear dynamics and, during transients, outperform control schemes based on static mappings.
Title: A network-based approach for surveillance of occupational health exposures
Abstract: In the context of surveillance of health problems, the research carried out by the French national occupational disease surveillance and prevention network (R\'eseau National de Vigilance et de Pr\'evention des Pathologies Professionnelles, RNV3P) aims to develop, among other approaches, methods of surveillance, statistical analysis and modeling in order to study the structure and change over time of relationships between disease and exposure, and to detect emerging disease-exposure associations. In this perspective, this paper aims to present the concept of the "exposome" and to explain on what bases it is constructed. The exposome is defined as a network of relationships between occupational health problems that have in common one or several elements of occupational exposure (exposures, occupation and/or activity sector). The paper also aims to outline its potential for the study and programmed surveillance of composite disease-occupational exposure associations. We illustrate this approach by applying it to a sample from the RNV3P data, taking malignant tumours and focusing on the subgroup of non-Hodgkin lymphomas.
Title: Occupational Health Problem Network: the Exposome
Abstract: We present a reflection on the concept of relational networks applied to the French national occupational disease surveillance and prevention network (R\'eseau National de Vigilance et de Pr\'evention des Pathologies Professionnelles, RNV3P). This approach consists of searching for exposures common to occupational health problems.