Abstract: Researchers often calculate ratios of measured quantities. Specifying confidence limits for ratios is difficult, and the appropriate methods are often unknown. Appropriate methods are described (Fieller, Taylor, special bootstrap methods), and a simple geometrical interpretation is given for the Fieller method. Monte Carlo simulations show when these methods are appropriate and that the most frequently used methods (the index method and the zero-variance method) can lead to large liberal deviations from the desired confidence level. We discuss when standard regression or measurement error models can be used and when specific models for heteroscedastic data are required. Finally, an old warning is repeated: be aware of the problem of spurious correlations when using ratios.
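As an illustration of the Fieller method mentioned above, the following Python sketch (the function name and interface are ours, and it assumes approximately normal numerator and denominator estimates with given variance and covariance estimates) computes the confidence limits for a ratio of means as the roots of Fieller's quadratic:

import numpy as np
from scipy import stats

def fieller_interval(x_mean, y_mean, v_xx, v_yy, v_xy, df, level=0.95):
    # Solve (x - rho*y)^2 <= t^2 * (v_xx - 2*rho*v_xy + rho^2*v_yy)
    # as a quadratic inequality in rho.
    t2 = stats.t.ppf(0.5 + level / 2, df) ** 2
    a = y_mean ** 2 - t2 * v_yy               # leading coefficient
    b = -2.0 * (x_mean * y_mean - t2 * v_xy)
    c = x_mean ** 2 - t2 * v_xx
    disc = b ** 2 - 4.0 * a * c
    if a <= 0 or disc < 0:                    # denominator not clearly nonzero:
        return None                           # the confidence set is unbounded
    root = np.sqrt(disc)
    return (-b - root) / (2 * a), (-b + root) / (2 * a)

print(fieller_interval(10.0, 5.0, 0.4, 0.2, 0.05, df=28))

When the denominator is far from zero, this interval is close to the Taylor (delta-method) approximation; near zero, only Fieller-type sets retain the nominal coverage.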
Title: An Affinity Propagation Based method for Vector Quantization Codebook Design
Abstract: In this paper, we first modify a parameter in affinity propagation (AP) to improve its convergence ability, and then apply it to the vector quantization (VQ) codebook design problem. In order to improve the quality of the resulting codebook, we combine the improved AP (IAP) with the conventional LBG algorithm to obtain an effective algorithm called IAP-LBG. According to the experimental results, the proposed method not only enhances convergence but is also capable of providing higher-quality codebooks than the conventional LBG method.
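For context, here is a minimal NumPy sketch of the conventional LBG refinement loop that a hybrid like IAP-LBG would apply after seeding the codebook (the function name is ours, and the AP-based initialisation is not shown):

import numpy as np

def lbg_refine(data, codebook, n_iter=20, tol=1e-6):
    # Alternate nearest-codeword assignment and centroid update
    # until the mean distortion stops improving.
    prev = np.inf
    distortion = np.inf
    for _ in range(n_iter):
        d2 = ((data[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
        nearest = d2.argmin(axis=1)
        distortion = d2[np.arange(len(data)), nearest].mean()
        if prev - distortion < tol * prev:
            break
        prev = distortion
        for k in range(len(codebook)):        # centroid update, skip empty cells
            members = data[nearest == k]
            if len(members) > 0:
                codebook[k] = members.mean(axis=0)
    return codebook, distortion

rng = np.random.default_rng(0)
data = rng.normal(size=(500, 2))
cb, d = lbg_refine(data, data[rng.choice(500, 8, replace=False)].copy())
print(d)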
Title: Association Rules in the Relational Calculus
Abstract: One of the most utilized data mining tasks is the search for association rules. Association rules represent significant relationships between items in transactions. We extend the concept of association rule to represent a much broader class of associations, which we refer to as entity-relationship rules. Semantically, entity-relationship rules express associations between properties of related objects. Syntactically, these rules are based on a broad subclass of safe domain relational calculus queries. We propose a new definition of support and confidence for entity-relationship rules and for the frequency of entity-relationship queries. We prove that the definition of frequency satisfies standard probability axioms and the Apriori property.
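For orientation, the classical itemset versions of support and confidence, which the entity-relationship rules above generalize, can be computed as follows (helper names and toy transactions are ours; the sketch shows only the classical starting point, not the paper's definitions over related objects):

def support(itemset, transactions):
    # fraction of transactions containing every item of the itemset
    return sum(1 for t in transactions if itemset <= t) / len(transactions)

def confidence(antecedent, consequent, transactions):
    # conditional frequency of the consequent given the antecedent
    return support(antecedent | consequent, transactions) / support(antecedent, transactions)

ts = [{"bread", "milk"}, {"bread", "butter"},
      {"bread", "milk", "butter"}, {"milk"}]
print(support({"bread", "milk"}, ts))       # 0.5
print(confidence({"bread"}, {"milk"}, ts))  # 0.666...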
Title: A System for Predicting Subcellular Localization of Yeast Genome Using Neural Network
Abstract: The subcellular location of a protein can provide valuable information about its function. With the rapid increase of sequenced genomic data, the need for an automated and accurate tool to predict subcellular localization becomes increasingly important. Many efforts have been made to predict protein subcellular localization. This paper aims to merge artificial neural networks and bioinformatics to predict the location of proteins in the yeast genome. We introduce a new subcellular prediction method based on a backpropagation neural network. The results show that prediction within an error limit of 5 to 10 percent can be achieved with the system.
Title: Comparison and Combination of State-of-the-art Techniques for Handwritten Character Recognition: Topping the MNIST Benchmark
Abstract: Although the recognition of isolated handwritten digits has been a research topic for many years, it continues to be of interest for the research community and for commercial applications. We show that despite the maturity of the field, different approaches still deliver results that vary enough to allow improvements through their combination. We do so by choosing four well-motivated state-of-the-art recognition systems for which results on the standard MNIST benchmark are available. Comparing the errors made, we observe that they differ between all four systems, suggesting the use of classifier combination. We then determine the error rate of a hypothetical system that combines the outputs of the four systems. The result obtained in this manner is an error rate of 0.35% on the MNIST data, the best result published so far. We furthermore discuss the statistical significance of the combined result and of the results of the individual classifiers.
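The combination rule is not specified in the abstract; one plausible hypothetical scheme is a plurality vote over the predicted labels with a confidence-based tie-break, sketched here in Python (interface and tie-break are our assumptions):

from collections import Counter

def combine(votes, confidences=None):
    # plurality vote over per-classifier label predictions
    tally = Counter(votes)
    top = max(tally.values())
    tied = [label for label, n in tally.items() if n == top]
    if len(tied) == 1 or confidences is None:
        return tied[0]
    # tie-break: highest summed confidence among tied labels
    score = {label: sum(c for v, c in zip(votes, confidences) if v == label)
             for label in tied}
    return max(score, key=score.get)

print(combine([3, 3, 5, 8]))                        # 3
print(combine([3, 5, 5, 3], [0.9, 0.6, 0.7, 0.8]))  # 3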
Title: The structure of verbal sequences analyzed with unsupervised learning techniques
Abstract: Data mining allows the exploration of sequences of phenomena, whereas one usually tends to focus on isolated phenomena or on the relation between two phenomena. It offers invaluable tools for theoretical analyses and for exploring the structure of sentences, texts, dialogues, and speech. We report here the results of an attempt to use it for inspecting sequences of verbs from French accounts of road accidents. This analysis rests on an original unsupervised learning approach that allows the structure of sequential data to be discovered. The input to the analyzer consisted only of the verbs appearing in the sentences. It produced a classification of the links between two successive verbs into four distinct clusters, thus allowing text segmentation. We give here an interpretation of these clusters by applying a statistical analysis to independent semantic annotations.
Title: Geometric Analogue of Holographic Reduced Representation
Abstract: Holographic reduced representations (HRR) are based on superpositions of convolution-bound $n$-tuples, but the $n$-tuples cannot be regarded as vectors since the formalism is basis dependent. This is why HRR cannot be associated with geometric structures. Replacing convolutions by geometric products one arrives at reduced representations analogous to HRR but interpretable in terms of geometry. Variable bindings occurring in both HRR and its geometric analogue mathematically correspond to two different representations of $Z_2\times...\times Z_2$ (the additive group of binary $n$-tuples with addition modulo 2). As opposed to standard HRR, variable binding performed by means of geometric product allows for computing exact inverses of all nonzero vectors, a procedure even simpler than approximate inverses employed in HRR. The formal structure of the new reduced representation is analogous to cartoon computation, a geometric analogue of quantum computation.
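To make the contrast concrete, here is a minimal NumPy sketch (names ours) of standard HRR binding by circular convolution, the approximate inverse (involution) used in HRR, and the exact Fourier-domain inverse, which exists only when no Fourier coefficient of the vector vanishes:

import numpy as np

def bind(x, y):
    # HRR binding: circular convolution via FFT
    return np.real(np.fft.ifft(np.fft.fft(x) * np.fft.fft(y)))

def approx_inverse(y):
    # involution y*[k] = y[-k mod n], HRR's approximate inverse
    return np.concatenate(([y[0]], y[:0:-1]))

def exact_inverse(y):
    # exact convolutive inverse; unstable or undefined when a
    # Fourier coefficient of y is (near) zero
    return np.real(np.fft.ifft(1.0 / np.fft.fft(y)))

rng = np.random.default_rng(0)
n = 1024
x, y = rng.normal(0.0, 1.0 / np.sqrt(n), size=(2, n))
x_apx = bind(bind(x, y), approx_inverse(y))  # noisy copy of x (corr ~ 0.7)
x_ext = bind(bind(x, y), exact_inverse(y))   # x up to float precision
print(np.corrcoef(x, x_apx)[0, 1], np.allclose(x, x_ext))

In the geometric-product formalism, the exact inverse of every nonzero vector is available by construction, which is the simplification the abstract points to.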
Title: Linguistic Information Energy
Abstract: In this treatment a text is considered to be a series of word impulses which are read at a constant rate. The brain then assembles these units of information into higher units of meaning. A classical systems approach is used to model an initial part of this assembly process. The concepts of linguistic system response, information energy, and ordering energy are defined and analyzed. Finally, as a demonstration, information energy is used to estimate the publication dates of a series of texts and the similarity of a set of texts.
Title: Effective linkage learning using low-order statistics and clustering
Abstract: The adoption of probabilistic models for the best individuals found so far is a powerful approach in evolutionary computation. Increasingly complex models have been used by estimation of distribution algorithms (EDAs), often resulting in greater effectiveness at finding the global optima of hard optimization problems. Supervised and unsupervised learning of Bayesian networks are very effective options, since those models are able to capture high-order interactions among the variables of a problem. Diversity preservation, through niching techniques, has also been shown to be very important, both for identifying the problem structure and for maintaining several global optima. Recently, clustering was evaluated as an effective niching technique for EDAs, but the performance of simpler low-order EDAs was not shown to be much improved by clustering, except on some simple multimodal problems. This work proposes and evaluates a combination operator guided by a measure from information theory which allows a clustered low-order EDA to effectively solve a comprehensive range of benchmark optimization problems.
Title: Consistency of trace norm minimization
Abstract: Regularization by the sum of singular values, also referred to as the trace norm, is a popular technique for estimating low-rank rectangular matrices. In this paper, we extend some of the consistency results of the Lasso to provide necessary and sufficient conditions for rank consistency of trace norm minimization with the square loss. We also provide an adaptive version that is rank consistent even when the necessary condition for the non-adaptive version is not fulfilled.
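For context, the basic computational step in trace norm minimization is the proximal operator of the trace norm, which soft-thresholds the singular values; a minimal NumPy sketch follows (names ours, illustrating the operator rather than the paper's consistency analysis):

import numpy as np

def svt(W, lam):
    # argmin_X 0.5*||X - W||_F^2 + lam*||X||_* :
    # soft-threshold the singular values of W by lam
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    return (U * np.maximum(s - lam, 0.0)) @ Vt

W = np.outer([1.0, 2.0], [3.0, 4.0])
W += 0.05 * np.random.default_rng(1).normal(size=(2, 2))
print(np.linalg.matrix_rank(svt(W, lam=0.5)))  # 1: the small singular value is removed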
Title: Generating models for temporal representations
Abstract: We discuss the use of model building for temporal representations. We chose Polish to illustrate our discussion because it has an interesting aspectual system, but the points we wish to make are not language specific. Rather, our goal is to develop theoretical and computational tools for temporal model building tasks in computational semantics. To this end, we present a first-order theory of time and events which is rich enough to capture interesting semantic distinctions, and an algorithm which takes minimal models for first-order theories and systematically attempts to ``perturb'' their temporal component to provide non-minimal, but semantically significant, models.
Title: An efficient reduction of ranking to classification
Abstract: This paper describes an efficient reduction of the learning problem of ranking to binary classification. The reduction guarantees an average pairwise misranking regret of at most that of the binary classifier regret, improving a recent result of Balcan et al., which only guarantees a factor of 2. Moreover, our reduction applies to a broader class of ranking loss functions, admits a simpler proof, and the expected running time complexity of our algorithm, in terms of the number of calls to a classifier or preference function, is improved from $\Omega(n^2)$ to $O(n \log n)$. In addition, when only the top $k$ ranked elements are required ($k \ll n$), as in many applications in information extraction or search engines, the time complexity of our algorithm can be further reduced to $O(k \log k + n)$. Our reduction and algorithm are thus practical for realistic applications where the number of points to rank exceeds several thousand. Many of our results also extend beyond the bipartite case previously studied. Our reduction is randomized. To complement our result, we also derive lower bounds on any deterministic reduction from binary (preference) classification to ranking, implying that our use of a randomized reduction is essentially necessary for the guarantees we provide.
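The $O(n \log n)$ bound suggests a QuickSort-style use of the preference function; the following Python sketch (names and toy data ours) shows a randomized ranking with expected $O(n \log n)$ preference calls, omitting the regret analysis:

import random

def rank_items(items, prefers):
    # randomized QuickSort driven by a pairwise preference function
    if len(items) <= 1:
        return items
    pivot = random.choice(items)
    rest = [x for x in items if x is not pivot]
    above = [x for x in rest if prefers(x, pivot)]
    below = [x for x in rest if not prefers(x, pivot)]
    return rank_items(above, prefers) + [pivot] + rank_items(below, prefers)

scores = {"a": 3.0, "b": 1.0, "c": 2.5, "d": 0.2}  # stand-in for a learned
prefers = lambda x, y: scores[x] > scores[y]       # preference function
print(rank_items(list(scores), prefers))           # ['a', 'c', 'b', 'd']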
Title: Updating Probabilities: An Econometric Example
Abstract: We demonstrate how information in the form of observable data and moment constraints is introduced into the method of Maximum relative Entropy (ME). A general example of updating with data and moments is shown. A specific econometric example is solved in detail and can then be used as a template for real-world problems. A numerical example is compared to a large deviation solution, which illustrates some of the advantages of the ME method.
Title: Cinderella - Comparison of INDEpendent RELative Least-squares Amplitudes
Abstract: The identification of increasingly small signals from objects observed with a non-perfect instrument in a noisy environment poses a challenge for a statistically clean data analysis. We want to compute the probability that frequencies determined in various data sets are related or not, which cannot be answered with a simple comparison of amplitudes. Our method provides a statistical estimator of whether a given signal, appearing with different strengths in a set of observations, is of instrumental origin or intrinsic. Based on the spectral significance as an unbiased statistical quantity in frequency analysis, Discrete Fourier Transforms (DFTs) of target and background light curves are comparatively examined. The individual False-Alarm Probabilities are used to deduce conditional probabilities that a peak in a target spectrum is real in spite of a corresponding peak in the spectrum of a background or of comparison stars. Alternatively, we can compute joint probabilities of frequencies occurring in the DFT spectra of several data sets simultaneously but with different amplitudes, which leads to composed spectral significances. These are useful for investigating a star observed in different filters or during several observing runs. The composed spectral significance is a measure of the probability that none of the coinciding peaks in the DFT spectra under consideration are due to noise. Cinderella is a mathematical approach to a general statistical problem. Its potential reaches beyond photometry from ground or space: to all cases where a quantitative statistical comparison of periodicities in different data sets is desired. Examples of the composed and the conditional Cinderella modes for different observation setups are presented.
Title: Using Description Logics for Recognising Textual Entailment
Abstract: The aim of this paper is to show how we can handle the Recognising Textual Entailment (RTE) task by using Description Logics (DLs). To do this, we propose a representation of natural language semantics in DLs inspired by existing representations in first-order logic. But our most significant contribution is the definition of two novel inference tasks, A-Box saturation and subgraph detection, which are crucial for our approach to RTE.
Title: Probabilistic coherence and proper scoring rules
Abstract: We provide a self-contained proof of a theorem relating probabilistic coherence of forecasts to their non-domination by rival forecasts with respect to any proper scoring rule. The theorem appears to be new but is closely related to results achieved by other investigators.
Title: Fuzzy Modeling of Electrical Impedance Tomography Image of the Lungs
Abstract: Electrical Impedance Tomography (EIT) is a functional imaging method that is being developed for bedside use in critical care medicine. Aiming at improving the chest anatomical resolution of EIT images, we developed a fuzzy model based on the high temporal resolution of EIT and the functional information contained in the pulmonary perfusion and ventilation signals. EIT data from an experimental animal model were collected during normal ventilation and apnea, while an injection of hypertonic saline was used as a reference. The fuzzy model was elaborated in three parts: a model of the heart, a pulmonary map from ventilation images, and a pulmonary map from perfusion images. Image segmentation was performed using a threshold method, and a ventilation/perfusion map was generated. EIT images treated by the fuzzy model were compared with the hypertonic saline injection method and CT-scan images, giving good results from both a qualitative (the image obtained by the model was very similar to that of the CT-scan) and a quantitative (the ROC curve gave an area equal to 0.93) point of view. These results represent an important step in EIT imaging, since they open the possibility of developing EIT-based bedside clinical methods which are not available nowadays. These achievements could serve as the basis for developing EIT diagnosis systems for some life-threatening diseases commonly found in critical care medicine.
Title: Nontraditional Scoring of C-tests
Abstract: In C-tests the hypothesis of local independence of items is violated, which does not permit them to be considered true tests. It is suggested that the distances between separate C-test items (blanks) be determined and the items combined into clusters. Weights, inversely proportional to the number of items in the corresponding clusters, are assigned to the items. As a result, the C-test structure becomes similar to that of classical tests, without violation of the local independence hypothesis.
Title: Modelling the effects of air pollution on health using Bayesian Dynamic Generalised Linear Models
Abstract: The relationship between short-term exposure to air pollution and mortality or morbidity has been the subject of much recent research, in which the standard method of analysis uses Poisson linear or additive models. In this paper we use a Bayesian dynamic generalised linear model (DGLM) to estimate this relationship, which allows the standard linear or additive model to be extended in two ways: (i) the long-term trend and temporal correlation present in the health data can be modelled by an autoregressive process rather than a smooth function of calendar time; (ii) the effects of air pollution are allowed to evolve over time. The efficacy of these two extensions is investigated by applying a series of dynamic and non-dynamic models to air pollution and mortality data from Greater London. A Bayesian approach is taken throughout, and a Markov chain Monte Carlo simulation algorithm is presented for inference. An alternative likelihood-based analysis is also presented, in order to allow a direct comparison with the only previous analysis of air pollution and health data using a DGLM.
Title: Using Synchronic and Diachronic Relations for Summarizing Multiple Documents Describing Evolving Events
Abstract: In this paper we present a fresh look at the problem of summarizing evolving events from multiple sources. After a discussion concerning the nature of evolving events, we introduce a distinction between linearly and non-linearly evolving events. We then present a general methodology for the automatic creation of summaries from evolving events. At its heart lie the notions of Synchronic and Diachronic cross-document Relations (SDRs), whose aim is the identification of similarities and differences between sources from synchronic and diachronic perspectives. SDRs do not connect documents or textual elements found therein, but structures one might call messages. Applying this methodology yields a set of messages and the SDRs connecting them, that is, a graph which we call a grid. We show how such a grid can be considered the starting point of a Natural Language Generation system. The methodology is evaluated in two case studies, one for linearly evolving events (descriptions of football matches) and another for non-linearly evolving events (terrorist incidents involving hostages). In both cases we evaluate the results produced by our computational systems.
Title: Stationary probability density of stochastic search processes in global optimization
Abstract: A method for the construction of approximate analytical expressions for the stationary marginal densities of general stochastic search processes is proposed. From the marginal densities, regions of the search space that contain the global optima with high probability can be readily identified. The density estimation procedure involves a controlled number of linear operations, with a computational cost per iteration that grows linearly with problem size.
Title: Bayesian Online Changepoint Detection
Abstract: Changepoints are abrupt variations in the generative parameters of a data sequence. Online detection of changepoints is useful in modelling and prediction of time series in application areas such as finance, biometrics, and robotics. While frequentist methods have yielded online filtering and prediction techniques, most Bayesian papers have focused on the retrospective segmentation problem. Here we examine the case where the model parameters before and after the changepoint are independent and we derive an online algorithm for exact inference of the most recent changepoint. We compute the probability distribution of the length of the current ``run,'' or time since the last changepoint, using a simple message-passing algorithm. Our implementation is highly modular so that the algorithm may be applied to a variety of types of data. We illustrate this modularity by demonstrating the algorithm on three different real-world data sets.
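A minimal sketch of the run-length message-passing recursion described above, assuming a constant hazard rate and a user-supplied predictive probability function; a real model would maintain per-run-length sufficient statistics (names and interface are ours):

import numpy as np
from scipy.stats import norm

def bocpd(data, hazard, pred_prob):
    # propagate the posterior over the run length (time since the last
    # changepoint) with one growth and one changepoint message per datum
    R = np.array([1.0])                        # P(run length = 0) = 1
    history = []
    for x in data:
        pi = pred_prob(x, np.arange(len(R)))   # predictive per run length
        growth = R * pi * (1.0 - hazard)       # run continues: r -> r + 1
        cp = (R * pi * hazard).sum()           # changepoint:   r -> 0
        R = np.concatenate(([cp], growth))
        R /= R.sum()
        history.append(R)
    return history

# toy predictive that ignores the run length; a conjugate model (e.g.
# Normal with unknown mean) would make the recursion detect changes
data = np.r_[np.zeros(5), 5.0 + np.zeros(5)]
print(bocpd(data, 0.1, lambda x, r: np.full(len(r), norm.pdf(x)))[-1][:3])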
Title: Retrospective Markov chain Monte Carlo methods for Dirichlet process hierarchical models
Abstract: Inference for Dirichlet process hierarchical models is typically performed using Markov chain Monte Carlo methods, which can be roughly categorised into marginal and conditional methods. The former integrate out analytically the infinite-dimensional component of the hierarchical model and sample from the marginal distribution of the remaining variables using the Gibbs sampler. Conditional methods impute the Dirichlet process and update it as a component of the Gibbs sampler. Since this requires imputation of an infinite-dimensional process, implementation of the conditional method has relied on finite approximations. In this paper we show how to avoid such approximations by designing two novel Markov chain Monte Carlo algorithms which sample from the exact posterior distribution of quantities of interest. The approximations are avoided by the new technique of retrospective sampling. We also show how the algorithms can obtain samples from functionals of the Dirichlet process. The marginal and the conditional methods are compared and a careful simulation study is included, which involves a non-conjugate model, different datasets and prior specifications.
Title: Analyzing covert social network foundation behind terrorism disaster
Abstract: This paper addresses a method for analyzing the covert social network foundation hidden behind a terrorism disaster. It solves a node discovery problem: discovering a node which functions relevantly in a social network but has escaped monitoring of the presence and mutual relationships of nodes. The method aims at integrating the expert investigator's prior understanding, insight into the nature of terrorists' social networks derived from complex graph theory, and computational data processing. The social network responsible for the 9/11 attack in 2001 is used in a simulation experiment to evaluate the performance of the method.
Title: Stability of the Gibbs Sampler for Bayesian Hierarchical Models
Abstract: We characterise the convergence of the Gibbs sampler which samples from the joint posterior distribution of parameters and missing data in hierarchical linear models with arbitrary symmetric error distributions. We show that the convergence can be uniform, geometric or sub-geometric depending on the relative tail behaviour of the error distributions, and on the parametrisation chosen. Our theory is applied to characterise the convergence of the Gibbs sampler on latent Gaussian process models. We indicate how the theoretical framework we introduce will be useful in analyzing more complex models.
Title: Adaptive Importance Sampling in General Mixture Classes
Abstract: In this paper, we propose an adaptive algorithm that iteratively updates both the weights and component parameters of a mixture importance sampling density so as to optimise the importance sampling performances, as measured by an entropy criterion. The method is shown to be applicable to a wide class of importance sampling densities, which includes in particular mixtures of multivariate Student t distributions. The performances of the proposed scheme are studied on both artificial and real examples, highlighting in particular the benefit of a novel Rao-Blackwellisation device which can be easily incorporated in the updating scheme.
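As a rough, weights-only sketch of this kind of update (component parameters are held fixed here; names, interface and toy target are ours): importance weights are Rao-Blackwellised into component responsibilities, which yield the new mixture weights.

import numpy as np
from scipy import stats

def update_mixture_weights(target_logpdf, alphas, components, n=2000, rng=None):
    # one adaptation step for the weights of a mixture importance
    # sampling density; `components` are frozen scipy distributions
    rng = rng or np.random.default_rng(0)
    idx = rng.choice(len(alphas), size=n, p=alphas)
    x = np.array([components[d].rvs(random_state=rng) for d in idx])
    comp_pdf = np.stack([c.pdf(x) for c in components])   # shape (D, n)
    q = alphas @ comp_pdf                                 # mixture density
    w = np.exp(target_logpdf(x)) / q                      # importance weights
    w /= w.sum()
    rho = (alphas[:, None] * comp_pdf) / q                # responsibilities
    return rho @ w                                        # new mixture weights

target = lambda x: stats.norm.logpdf(x, loc=2.0)
comps = [stats.norm(-2.0, 1.0), stats.norm(2.0, 1.0)]
print(update_mixture_weights(target, np.array([0.5, 0.5]), comps))
# mass shifts toward the component matching the target

Iterating such updates (and, in the paper's scheme, also updating the component parameters) drives the mixture toward the target under an entropy criterion.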
Title: Particle Filters for Partially Observed Diffusions
Abstract: In this paper we introduce a novel particle filter scheme for a class of partially observed multivariate diffusions. We consider a variety of observation schemes, including diffusion observed with error, observation of a subset of the components of the multivariate diffusion, and arrival times of a Poisson process whose intensity is a known function of the diffusion (Cox process). Unlike currently available methods, our particle filters do not require approximations of the transition and/or the observation density using time-discretisations. Instead, they build on recent methodology for the exact simulation of the diffusion process and the unbiased estimation of the transition density. We introduce the Generalised Poisson Estimator, which generalises the Poisson Estimator. A central limit theorem is given for our particle filter scheme.
Title: Causality and Association: The Statistical and Legal Approaches
Abstract: This paper discusses different needs and approaches to establishing ``causation'' that are relevant in legal cases involving statistical input based on epidemiological (or more generally observational or population-based) information. We distinguish between three versions of ``cause'': the first involves negligence in providing or allowing exposure, the second involves ``cause'' as it is shown through a scientifically proved increased risk of an outcome from the exposure in a population, and the third considers ``cause'' as it might apply to an individual plaintiff based on the first two. The population-oriented ``cause'' is that commonly addressed by statisticians, and we propose a variation on the Bradford Hill approach to testing such causality in an observational framework, and discuss how such a systematic series of tests might be considered in a legal context. We review some current legal approaches to using probabilistic statements, and link these with the scientific methodology as developed here. In particular, we provide an approach both to the idea of individual outcomes being caused on a balance of probabilities, and to the idea of material contribution to such outcomes. Statistical terminology and legal usage of terms such as ``proof on the balance of probabilities'' or ``causation'' can easily become confused, largely due to similar language describing dissimilar concepts; we conclude, however, that a careful analysis can identify and separate those areas in which a legal decision alone is required and those areas in which scientific approaches are useful.
Title: The predictability of letters in written English
Abstract: We show that the predictability of letters in written English texts depends strongly on their position in the word. The first letters are usually the least easy to predict. This agrees with the intuitive notion that words are well defined subunits in written languages, with much weaker correlations across these units than within them. It implies that the average entropy of a letter deep inside a word is roughly 4 times smaller than the entropy of the first letter.
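As a crude illustration in Python: the sketch below computes the plain per-position letter entropy on a toy word list; the paper's claim concerns predictability given the preceding context, which requires a large corpus and a context model (names and data are ours):

import math
from collections import Counter

def positional_entropy(words, pos):
    # Shannon entropy (bits) of the letter at a given position,
    # over the words long enough to have that position
    counts = Counter(w[pos] for w in words if len(w) > pos)
    total = sum(counts.values())
    return -sum(c / total * math.log2(c / total) for c in counts.values())

words = "the predictability of letters in written english texts".split()
for pos in range(3):
    print(pos, round(positional_entropy(words, pos), 2))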
Title: Bayesian treed Gaussian process models with an application to computer modeling
Abstract: Motivated by a computer experiment for the design of a rocket booster, this paper explores nonstationary modeling methodologies that couple stationary Gaussian processes with treed partitioning. Partitioning is a simple but effective method for dealing with nonstationarity. The methodological developments and statistical computing details which make this approach efficient are described in detail. In addition to providing an analysis of the rocket booster simulator, our approach is demonstrated to be effective in other arenas.
Title: A Family of Generalized Beta Distributions for Income
Abstract: The mathematical properties of a family of generalized beta distributions, including the beta-normal, skewed-t, log-F, beta-exponential, and beta-Weibull distributions, have recently been studied in several publications. This paper applies these distributions to the modeling of the size distribution of income and computes maximum likelihood estimates of their parameters. Their performance is compared to the widely used generalized beta distributions of the first and second types in terms of measures of goodness of fit.
Title: The Use of Unlabeled Data in Predictive Modeling
Abstract: The incorporation of unlabeled data in regression and classification analysis is an increasing focus of the applied statistics and machine learning literatures, with a number of recent examples demonstrating the potential for unlabeled data to contribute to improved predictive accuracy. The statistical basis for this semisupervised analysis does not appear to have been well delineated; as a result, the underlying theory and rationale may be underappreciated, especially by nonstatisticians. There is also room for statisticians to become more fully engaged in the vigorous research in this important area of intersection of the statistical and computer sciences. Much of the theoretical work in the literature has focused, for example, on geometric and structural properties of the unlabeled data in the context of particular algorithms, rather than probabilistic and statistical questions. This paper overviews the fundamental statistical foundations for predictive modeling and the general questions associated with unlabeled data, highlighting the relevance of venerable concepts of sampling design and prior specification. This theory, illustrated with a series of central illustrative examples and two substantial real data analyses, shows precisely when, why and how unlabeled data matter.
Title: Statistical and Clinical Aspects of Hospital Outcomes Profiling
Abstract: Hospital profiling involves a comparison of a health care provider's structure, processes of care, or outcomes to a standard, often in the form of a report card. Given the ubiquity of report cards and similar consumer ratings in contemporary American culture, it is notable that these are a relatively recent phenomenon in health care. Prior to the 1986 release of Medicare hospital outcome data, little such information was publicly available. We review the historical evolution of hospital profiling with special emphasis on outcomes; present a detailed history of cardiac surgery report cards, the paradigm for modern provider profiling; discuss the potential unintended negative consequences of public report cards; and describe various statistical methodologies for quantifying the relative performance of cardiac surgery programs. Outstanding statistical issues are also described.
Title: A Conversation with Shoutir Kishore Chatterjee
Abstract: Shoutir Kishore Chatterjee was born in Ranchi, a small hill station in India, on November 6, 1934. He received his B.Sc. in statistics from the Presidency College, Calcutta, in 1954, and M.Sc. and Ph.D. degrees in statistics from the University of Calcutta in 1956 and 1962, respectively. He was appointed a lecturer in the Department of Statistics, University of Calcutta, in 1960 and was a member of its faculty until his retirement as a professor in 1997. Indeed, from the 1970s he steered the teaching and research activities of the department for the next three decades. Professor Chatterjee was the National Lecturer in Statistics (1985--1986) of the University Grants Commission, India, the President of the Section of Statistics of the Indian Science Congress (1989) and an Emeritus Scientist (1997--2000) of the Council of Scientific and Industrial Research, India. Professor Chatterjee, affectionately known as SKC to his students and admirers, is a truly exceptional person who embodies the spirit of eternal India. He firmly believes that ``fulfillment in man's life does not come from amassing a lot of money, after the threshold of what is required for achieving a decent living is crossed. It does not come even from peer recognition for intellectual achievements. Of course, one has to work and toil a lot before one realizes these facts.''