Title: Construction of weakly CUD sequences for MCMC sampling
|
Abstract: In Markov chain Monte Carlo (MCMC) sampling, considerable thought goes into constructing random transitions. But those transitions are almost always driven by a simulated IID sequence. Recently it has been shown that replacing an IID sequence by a weakly completely uniformly distributed (WCUD) sequence leads to consistent estimation in finite state spaces. Unfortunately, few WCUD sequences are known. This paper gives general methods for proving that a sequence is WCUD, shows that some specific sequences are WCUD, and shows that certain operations on WCUD sequences yield new WCUD sequences. A numerical example on a 42-dimensional continuous Gibbs sampler found that some WCUD input sequences produced variance reductions ranging from tens to hundreds for posterior means of the parameters, compared to IID inputs.
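As an illustration of the mechanism being studied, here is a minimal random-walk Metropolis sketch in which the uniform stream driving the chain is an explicit input, so IID draws could in principle be swapped for a (W)CUD sequence; the function name and the one-dimensional target are illustrative, not the 42-dimensional Gibbs sampler of the paper.

```python
import numpy as np

def metropolis_chain(log_target, x0, n_steps, uniforms, step=1.0):
    """Random-walk Metropolis driven by an externally supplied uniform stream.

    `uniforms` must provide 2 * n_steps values in [0, 1); with IID draws this
    is ordinary Metropolis, and the same code can instead consume a (W)CUD
    driver sequence, which is the substitution studied in the paper.
    """
    u = iter(uniforms)
    x, samples = x0, []
    for _ in range(n_steps):
        # One uniform for the proposal, one for the accept/reject decision.
        z = step * (2.0 * next(u) - 1.0)      # symmetric proposal increment
        y = x + z
        accept = np.log(next(u) + 1e-300) < log_target(y) - log_target(x)
        x = y if accept else x
        samples.append(x)
    return np.array(samples)

# IID driver (baseline); a WCUD construction would supply `uniforms` instead.
rng = np.random.default_rng(0)
draws = metropolis_chain(lambda t: -0.5 * t**2, 0.0, 1000, rng.random(2000))
```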
|
Title: A new method for fast computing unbiased estimators of cumulants
|
Abstract: We propose new algorithms for generating $k$-statistics, multivariate $k$-statistics, polykays and multivariate polykays. The resulting computational times are very fast compared with procedures existing in the literature. This speed-up is obtained by means of a symbolic method arising from the classical umbral calculus, a light syntax involving only elementary rules for managing sequences of numbers or polynomials. The cornerstone of the procedures introduced here is the connection between the cumulants of a random variable and a suitable compound Poisson random variable. This connection also holds for multivariate random variables.
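For context, the first few univariate $k$-statistics, the unbiased cumulant estimators that such algorithms generate (standard classical formulas, not specific to the symbolic method of the paper):

$$k_1=\bar{x},\qquad k_2=\frac{1}{n-1}\sum_{i=1}^n (x_i-\bar{x})^2,\qquad k_3=\frac{n}{(n-1)(n-2)}\sum_{i=1}^n (x_i-\bar{x})^3,$$

with $E[k_r]=\kappa_r$, the $r$th cumulant; the formulas grow rapidly in complexity with $r$, which is what motivates fast symbolic generation.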
|
Title: Message-passing for Maximum Weight Independent Set
|
Abstract: We investigate the use of message-passing algorithms for the problem of finding the max-weight independent set (MWIS) in a graph. First, we study the performance of the classical loopy max-product belief propagation. We show that each fixed point estimate of max-product can be mapped in a natural way to an extreme point of the LP polytope associated with the MWIS problem. However, this extreme point may not be the one that maximizes the value of node weights; the particular extreme point at final convergence depends on the initialization of max-product. We then show that if max-product is started from the natural initialization of uninformative messages, it always solves the correct LP -- if it converges. This result is obtained via a direct analysis of the iterative algorithm, and cannot be obtained by looking only at fixed points. The tightness of the LP relaxation is thus necessary for max-product optimality, but it is not sufficient. Motivated by this observation, we show that a simple modification of max-product becomes gradient descent on (a convexified version of) the dual of the LP, and converges to the dual optimum. We also develop a message-passing algorithm that recovers the primal MWIS solution from the output of the descent algorithm. We show that the MWIS estimate obtained using these two algorithms in conjunction is correct when the graph is bipartite and the MWIS is unique. Finally, we show that any problem of MAP estimation for probability distributions over finite domains can be reduced to an MWIS problem. We believe this reduction will yield new insights and algorithms for MAP estimation.
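A minimal sketch of one standard scalar-message form of max-product for MWIS, started from uninformative (zero) messages as in the abstract; the exact updates, damping, convergence handling, and the modified gradient-descent variant in the paper may differ, and the names here are illustrative.

```python
def max_product_mwis(weights, adj, n_iters=100):
    """Scalar max-product messages for max-weight independent set (sketch).

    weights: dict node -> weight; adj: dict node -> set of neighbors.
    Messages start at zero, matching the natural uninformative initialization.
    """
    msg = {(i, j): 0.0 for i in adj for j in adj[i]}
    for _ in range(n_iters):
        new = {}
        for i in adj:
            for j in adj[i]:
                # Message i -> j: how much i "wants" to be in the set,
                # excluding the influence of j.
                new[(i, j)] = max(0.0, weights[i] - sum(msg[(k, i)] for k in adj[i] if k != j))
        msg = new
    # Node estimate: include i if its weight exceeds the total incoming pressure.
    return {i: weights[i] > sum(msg[(k, i)] for k in adj[i]) for i in adj}

# Tiny example: a path a-b-c with a heavy middle node; the MWIS is {b}.
adj = {"a": {"b"}, "b": {"a", "c"}, "c": {"b"}}
est = max_product_mwis({"a": 1.0, "b": 3.0, "c": 1.0}, adj)
```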
|
Title: I'm sorry to say, but your understanding of image processing fundamentals is absolutely wrong
|
Abstract: The ongoing discussion whether modern vision systems have to be viewed as visually-enabled cognitive systems or cognitively-enabled vision systems is groundless, because perceptual and cognitive faculties of vision are separate components of human (and consequently, artificial) information processing system modeling.
|
Title: Mathematical Structure of Quantum Decision Theory
|
Abstract: One of the most complex systems is the human brain, whose formalized functioning is characterized by decision theory. We present a "Quantum Decision Theory" of decision making, based on the mathematical theory of separable Hilbert spaces. This mathematical structure captures the effect of superposition of composite prospects, including many incorporated intentions, which allows us to explain a variety of interesting fallacies and anomalies that have been reported to characterize the decision making of real human beings. The theory describes entangled decision making, non-commutativity of subsequent decisions, and intention interference for composite prospects. We demonstrate how the violation of Savage's sure-thing principle (disjunction effect) can be explained as a result of the interference of intentions when making decisions under uncertainty. The conjunction fallacy is also explained by the presence of the interference terms. We demonstrate that all known anomalies and paradoxes documented in the context of classical decision theory are reducible to just a few mathematical archetypes, all of which find straightforward explanations in the frame of the developed quantum approach.
|
Title: A 8 bits Pipeline Analog to Digital Converter Design for High Speed Camera Application
|
Abstract: This paper describes a pipeline analog-to-digital converter implemented for a high-speed camera. In pipeline ADC design, the prime factor is designing an operational amplifier with high gain so that the ADC achieves high speed. Other advantages of the pipeline architecture are that it is conceptually simple, easy to implement in layout, and flexible for increasing speed. The design and simulation were carried out using Mentor Graphics software in 0.6 \mu m CMOS technology, with a total power dissipation of 75.47 mW. Circuit techniques used include a precise comparator, an operational amplifier, and clock management. A switched capacitor is used to sample and multiply at each stage. Simulation shows a worst-case DNL and INL of 0.75 LSB. The design operates at 5 V DC. The ADC achieves an SNDR of 44.86 dB. Keywords: pipeline, switched capacitor, clock management
|
Title: Design and Implementation a 8 bits Pipeline Analog to Digital Converter in the Technology 0.6 \mu m CMOS Process
|
Abstract: This paper describes an 8-bit, 20 Msamples/s pipeline analog-to-digital converter implemented in 0.6 \mu m CMOS technology with a total power dissipation of 75.47 mW. Circuit techniques used include a precise comparator, an operational amplifier, and clock management. A switched capacitor is used to sample and multiply at each stage. Simulation shows a worst-case DNL and INL of 0.75 LSB. The design operates at 5 V DC. The ADC achieves an SNDR of 44.86 dB. Keywords: pipeline, switched capacitor, clock management
|
Title: Logics for the Relational Syllogistic
|
Abstract: The Aristotelian syllogistic cannot account for the validity of many inferences involving relational facts. In this paper, we investigate the prospects for providing a relational syllogistic. We identify several fragments based on (a) whether negation is permitted on all nouns, including those in the subject of a sentence; and (b) whether the subject noun phrase may contain a relative clause. The logics we present are extensions of the classical syllogistic, and we pay special attention to the question of whether reductio ad absurdum is needed. Thus our main goal is to derive results on the existence (or non-existence) of syllogistic proof systems for relational fragments. We also determine the computational complexity of all our fragments.
|
Title: Microarrays, Empirical Bayes and the Two-Groups Model
|
Abstract: The classic frequentist theory of hypothesis testing developed by Neyman, Pearson and Fisher has a claim to being the twentieth century's most influential piece of applied mathematics. Something new is happening in the twenty-first century: high-throughput devices, such as microarrays, routinely require simultaneous hypothesis tests for thousands of individual cases, not at all what the classical theory had in mind. In these situations empirical Bayes information begins to force itself upon frequentists and Bayesians alike. The two-groups model is a simple Bayesian construction that facilitates empirical Bayes analysis. This article concerns the interplay of Bayesian and frequentist ideas in the two-groups setting, with particular attention focused on Benjamini and Hochberg's False Discovery Rate method. Topics include the choice and meaning of the null hypothesis in large-scale testing situations, power considerations, the limitations of permutation methods, significance testing for groups of cases (such as pathways in microarray studies), correlation effects, multiple confidence intervals and Bayesian competitors to the two-groups model.
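In standard notation (ours, not quoted from the article), the two-groups model and the local false discovery rate it supports are

$$f(z) = \pi_0 f_0(z) + \pi_1 f_1(z), \qquad \mathrm{fdr}(z) = \Pr\{\text{null}\mid z\} = \frac{\pi_0 f_0(z)}{f(z)},$$

where $\pi_0$ is the proportion of null cases and $f_0$, $f_1$ are the null and non-null densities of the test statistic $z$; the tail-area analogue $\mathrm{Fdr}(z)=\pi_0 F_0(z)/F(z)$ gives the empirical Bayes reading of Benjamini and Hochberg's False Discovery Rate procedure.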
|
Title: Comment: Microarrays, Empirical Bayes and the Two-Groups Model
|
Abstract: Comment on ``Microarrays, Empirical Bayes and the Two-Groups Model'' [arXiv:0808.0572]
|
Title: Comment: Microarrays, Empirical Bayes and the Two-Group Model
|
Abstract: Comment on ``Microarrays, Empirical Bayes and the Two-Group Model'' [arXiv:0808.0572]
|
Title: Comment: Microarrays, Empirical Bayes and the Two-Groups Model
|
Abstract: Brad Efron's paper [arXiv:0808.0572] has inspired a return to the ideas behind Bayes, frequency and empirical Bayes. The latter preferably would not be limited to exchangeable models for the data and hyperparameters. Parallels are revealed between microarray analyses and profiling of hospitals, with advances suggesting more decision modeling for gene identification also. Then good multilevel and empirical Bayes models for random effects should be sought when regression toward the mean is anticipated.
|
Title: Comment: Microarrays, Empirical Bayes and the Two-Groups Model
|
Abstract: Comment on ``Microarrays, Empirical Bayes and the Two-Groups Model'' [arXiv:0808.0572]
|
Title: Rejoinder: Microarrays, Empirical Bayes and the Two-Groups Model
|
Abstract: Rejoinder to ``Microarrays, Empirical Bayes and the Two-Groups Model'' [arXiv:0808.0572]
|
Title: The 2005 Neyman Lecture: Dynamic Indeterminism in Science
|
Abstract: Jerzy Neyman's life history and some of his contributions to applied statistics are reviewed. In a 1960 article he wrote: ``Currently in the period of dynamic indeterminism in science, there is hardly a serious piece of research which, if treated realistically, does not involve operations on stochastic processes. The time has arrived for the theory of stochastic processes to become an item of usual equipment of every applied statistician.'' The emphasis in this article is on stochastic processes and on stochastic process data analysis. A number of data sets and corresponding substantive questions are addressed. The data sets concern sardine depletion, blowfly dynamics, weather modification, elk movement and seal journeying. Three of the examples are from Neyman's work and four from the author's joint work with collaborators.
|
Title: Comment: The 2005 Neyman Lecture: Dynamic Indeterminism in Science
|
Abstract: Comment on ``The 2005 Neyman Lecture: Dynamic Indeterminism in Science'' [arXiv:0808.0620]
|
Title: Comment: The 2005 Neyman Lecture: Dynamic Indeterminism in Science
|
Abstract: Comment on ``The 2005 Neyman Lecture: Dynamic Indeterminism in Science'' [arXiv:0808.0620]
|
Title: Rejoinder: The 2005 Neyman Lecture: Dynamic Indeterminism in Science
|
Abstract: Rejoinder to ``The 2005 Neyman Lecture: Dynamic Indeterminism in Science'' [arXiv:0808.0620]
|
Title: Verbal Autopsy Methods with Multiple Causes of Death
|
Abstract: Verbal autopsy procedures are widely used for estimating cause-specific mortality in areas without medical death certification. Data on symptoms reported by caregivers along with the cause of death are collected from a medical facility, and the cause-of-death distribution is estimated in the population where only symptom data are available. Current approaches analyze only one cause at a time, involve assumptions judged difficult or impossible to satisfy, and require expensive, time-consuming, or unreliable physician reviews, expert algorithms, or parametric statistical models. By generalizing current approaches to analyze multiple causes, we show how most of the difficult assumptions underlying existing methods can be dropped. These generalizations also make physician review, expert algorithms and parametric statistical assumptions unnecessary. With theoretical results and empirical analyses of data from China and Tanzania, we illustrate the accuracy of this approach. While no method of analyzing verbal autopsy data, including the more computationally intensive approach offered here, can give accurate estimates in all circumstances, the procedure offered is conceptually simpler, less expensive, more general, as or more replicable, and easier to use in practice than existing approaches. We also show how our focus on estimating aggregate proportions, which are the quantities of primary interest in verbal autopsy studies, may also greatly reduce the assumptions necessary for, and thus improve the performance of, many individual classifiers in this and other areas. As a companion to this paper, we also offer easy-to-use software that implements the methods discussed herein.
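A schematic of the kind of aggregate identity such methods exploit (our notation, not the paper's):

$$\Pr(S = s) = \sum_{d} \Pr(S = s \mid D = d)\,\Pr(D = d),$$

so with the symptom-given-cause distributions $\Pr(S=s\mid D=d)$ estimated from the medical-facility sample, the population cause-of-death distribution $\Pr(D=d)$ can be recovered by solving this linear system, constrained so the proportions are nonnegative and sum to one, without classifying individual deaths.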
|
Title: High-Breakdown Robust Multivariate Methods
|
Abstract: When applying a statistical method in practice, it often occurs that some observations deviate from the usual assumptions. However, many classical methods are sensitive to outliers. The goal of robust statistics is to develop methods that are robust against the possibility that one or several unannounced outliers may occur anywhere in the data. These methods then make it possible to detect outlying observations by their residuals from a robust fit. We focus on high-breakdown methods, which can deal with a substantial fraction of outliers in the data. We give an overview of recent high-breakdown robust methods for multivariate settings such as covariance estimation, multiple and multivariate regression, discriminant analysis, principal components and multivariate calibration.
|
Title: Support union recovery in high-dimensional multivariate regression
|
Abstract: In multivariate regression, a $K$-dimensional response vector is regressed upon a common set of $p$ covariates, with a matrix $B^*\in\mathbb{R}^{p\times K}$ of regression coefficients. We study the behavior of the multivariate group Lasso, in which block regularization based on the $\ell_1/\ell_2$ norm is used for support union recovery, or recovery of the set of $s$ rows for which $B^*$ is nonzero. Under high-dimensional scaling, we show that the multivariate group Lasso exhibits a threshold for the recovery of the exact row pattern with high probability over the random design and noise that is specified by the sample complexity parameter $\theta(n,p,s):=n/[2\psi(B^*)\log(p-s)]$. Here $n$ is the sample size, and $\psi(B^*)$ is a sparsity-overlap function measuring a combination of the sparsities and overlaps of the $K$-regression coefficient vectors that constitute the model. We prove that the multivariate group Lasso succeeds for problem sequences $(n,p,s)$ such that $\theta(n,p,s)$ exceeds a critical level $\theta_u$, and fails for sequences such that $\theta(n,p,s)$ lies below a critical level $\theta_\ell$. For the special case of the standard Gaussian ensemble, we show that $\theta_\ell=\theta_u$ so that the characterization is sharp. The sparsity-overlap function $\psi(B^*)$ reveals that, if the design is uncorrelated on the active rows, $\ell_1/\ell_2$ regularization for multivariate regression never harms performance relative to an ordinary Lasso approach and can yield substantial improvements in sample complexity (up to a factor of $K$) when the coefficient vectors are suitably orthogonal. For more general designs, it is possible for the ordinary Lasso to outperform the multivariate group Lasso. We complement our analysis with simulations that demonstrate the sharpness of our theoretical results, even for relatively small problems.
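For reference, the block-regularized program under study has the standard form (our notation, with design $X\in\mathbb{R}^{n\times p}$ and responses $Y\in\mathbb{R}^{n\times K}$; normalization conventions vary):

$$\widehat{B} \in \arg\min_{B\in\mathbb{R}^{p\times K}}\;\frac{1}{2n}\,\|Y - XB\|_F^2 \;+\; \lambda_n \sum_{i=1}^{p} \|B_{i\cdot}\|_2,$$

where $B_{i\cdot}$ is the $i$th row of $B$; the $\ell_1/\ell_2$ penalty zeroes out entire rows at once, which is what makes recovery of the support union (the set of nonzero rows) possible.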
|
Title: A Conversation with Peter Huber
|
Abstract: Peter J. Huber was born on March 25, 1934, in Wohlen, a small town in the Swiss countryside. He obtained a diploma in mathematics in 1958 and a Ph.D. in mathematics in 1961, both from ETH Zurich. His thesis was in pure mathematics, but he then decided to go into statistics. He spent 1961--1963 as a postdoc at the statistics department in Berkeley where he wrote his first and most famous paper on robust statistics, ``Robust Estimation of a Location Parameter.'' After a position as a visiting professor at Cornell University, he became a full professor at ETH Zurich. He worked at ETH until 1978, interspersed by visiting positions at Cornell, Yale, Princeton and Harvard. After leaving ETH, he held professor positions at Harvard University 1978--1988, at MIT 1988--1992, and finally at the University of Bayreuth from 1992 until his retirement in 1999. He now lives in Klosters, a village in the Grisons in the Swiss Alps. Peter Huber has published four books and over 70 papers on statistics and data analysis. In addition, he has written more than a dozen papers and two books on Babylonian mathematics, astronomy and history. In 1972, he delivered the Wald lectures. He is a fellow of the IMS, of the American Association for the Advancement of Science, and of the American Academy of Arts and Sciences. In 1988 he received a Humboldt Award and in 1994 an honorary doctorate from the University of Neuch\^atel. In addition to his fundamental results in robust statistics, Peter Huber made important contributions to computational statistics, strategies in data analysis, and applications of statistics in fields such as crystallography, EEGs, and human growth curves.
|
Title: LLE with low-dimensional neighborhood representation
|
Abstract: The local linear embedding algorithm (LLE) is a non-linear dimension-reducing technique, widely used due to its computational simplicity and intuitive approach. LLE first linearly reconstructs each input point from its nearest neighbors and then preserves these neighborhood relations in the low-dimensional embedding. We show that the reconstruction weights computed by LLE capture the high-dimensional structure of the neighborhoods, and not the low-dimensional manifold structure. Consequently, the weight vectors are highly sensitive to noise. Moreover, this causes LLE to converge to a linear projection of the input, as opposed to its non-linear embedding goal. To overcome both of these problems, we propose to compute the weight vectors using a low-dimensional neighborhood representation. We prove theoretically that this straightforward and computationally simple modification of LLE reduces LLE's sensitivity to noise. This modification also removes the need for regularization when the number of neighbors is larger than the dimension of the input. We present numerical examples demonstrating both the perturbation and linear projection problems, and the improved outputs using the low-dimensional neighborhood representation.
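A minimal sketch of the LLE weight step, with an optional projection of each neighborhood onto its leading principal directions as a simplified reading of the proposed low-dimensional neighborhood representation; the function name and regularization constant are illustrative, not taken from the paper.

```python
import numpy as np

def lle_weights(x, neighbors, reg=1e-3, d=None):
    """Reconstruction weights for one point x from its neighbor coordinates.

    If d is given, the centered neighborhood is first projected onto its d
    leading principal directions, so the weights are computed from a
    low-dimensional representation of the neighborhood (a simplified form of
    the modification suggested in the abstract) rather than the raw
    high-dimensional points.
    """
    Z = np.asarray(neighbors) - np.asarray(x)     # center the neighborhood at x
    if d is not None:
        # Project the centered neighbors onto their top-d principal directions.
        _, _, Vt = np.linalg.svd(Z, full_matrices=False)
        Z = Z @ Vt[:d].T
    C = Z @ Z.T                                   # local Gram matrix (k x k)
    C += reg * np.trace(C) * np.eye(len(C))       # regularize (needed when k > dim)
    w = np.linalg.solve(C, np.ones(len(C)))       # solve C w = 1
    return w / w.sum()                            # weights sum to one
```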
|
Title: The Early Statistical Years: 1947--1967 A Conversation with Howard Raiffa
|
Abstract: Howard Raiffa earned his bachelor's degree in mathematics, his master's degree in statistics and his Ph.D. in mathematics at the University of Michigan. Since 1957, Raiffa has been a member of the faculty at Harvard University, where he is now the Frank P. Ramsey Chair in Managerial Economics (Emeritus) in the Graduate School of Business Administration and the Kennedy School of Government. A pioneer in the creation of the field known as decision analysis, his research interests span statistical decision theory, game theory, behavioral decision theory, risk analysis and negotiation analysis. Raiffa has supervised more than 90 doctoral dissertations and written 11 books. His new book is Negotiation Analysis: The Science and Art of Collaborative Decision Making. Another book, Smart Choices, co-authored with his former doctoral students John Hammond and Ralph Keeney, was the CPR (formerly known as the Center for Public Resources) Institute for Dispute Resolution Book of the Year in 1998. Raiffa helped to create the International Institute for Applied Systems Analysis and he later became its first Director, serving in that capacity from 1972 to 1975. His many honors and awards include the Distinguished Contribution Award from the Society of Risk Analysis; the Frank P. Ramsey Medal for outstanding contributions to the field of decision analysis from the Operations Research Society of America; and the Melamed Prize from the University of Chicago Business School for The Art and Science of Negotiation. He earned a Gold Medal from the International Association for Conflict Management and a Lifetime Achievement Award from the CPR Institute for Dispute Resolution. He holds honorary doctor's degrees from Carnegie Mellon University, the University of Michigan, Northwestern University, Ben Gurion University of the Negev and Harvard University. The latter was awarded in 2002.
|
Title: Mutual information is copula entropy
|
Abstract: We prove that mutual information is actually negative copula entropy, based on which a method for mutual information estimation is proposed.
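The identity can be sketched in a few lines, assuming densities exist (our paraphrase, not the paper's proof). By Sklar's theorem, $p(x,y)=c\big(F_X(x),F_Y(y)\big)\,p(x)\,p(y)$, where $c$ is the copula density, so

$$I(X;Y) = \int p(x,y)\,\log\frac{p(x,y)}{p(x)\,p(y)}\,dx\,dy = \int c(u,v)\,\log c(u,v)\,du\,dv = -H_c(U,V),$$

where the middle step uses the change of variables $u=F_X(x)$, $v=F_Y(y)$, and $H_c$ denotes the entropy of the copula density.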
|
Title: Text Modeling using Unsupervised Topic Models and Concept Hierarchies
|
Abstract: Statistical topic models provide a general data-driven framework for automated discovery of high-level knowledge from large collections of text documents. While topic models can potentially discover a broad range of themes in a data set, the interpretability of the learned topics is not always ideal. Human-defined concepts, on the other hand, tend to be semantically richer due to careful selection of words to define concepts but they tend not to cover the themes in a data set exhaustively. In this paper, we propose a probabilistic framework to combine a hierarchy of human-defined semantic concepts with statistical topic models to seek the best of both worlds. Experimental results using two different sources of concept hierarchies and two collections of text documents indicate that this combination leads to systematic improvements in the quality of the associated language models as well as enabling new techniques for inferring and visualizing the semantics of a document.
|
Title: Happy places or happy people? A multi-level modelling approach to the analysis of happiness and well-being
|
Abstract: This paper aims to enhance our understanding of substantive questions regarding self-reported happiness and well-being through the specification and use of multi-level models. To date, there have been numerous quantitative research studies of the happiness of individuals, based on single-level regression models, where typically a happiness index is related to a set of explanatory variables. There are also several single-level studies comparing aggregate happiness levels between countries. Nevertheless, there have been very few studies that attempt to simultaneously take into account variations in happiness and well-being at several different levels, such as individual, household, and area. Here, multilevel models are used with data from the British Household Panel Survey to assess the nature and extent of variations in happiness and well-being and to determine the relative importance of area (district, region), household and individual characteristics for these outcomes. Moreover, having taken into account the characteristics at these different levels in the multilevel models, the paper shows how it is possible to identify any areas that are associated with especially positive or negative feelings of happiness and well-being.
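In simplified random-intercept form, the kind of three-level model described here can be written as (our notation):

$$y_{ijk} = \beta_0 + \boldsymbol{\beta}^{\top}\mathbf{x}_{ijk} + v_k + u_{jk} + e_{ijk},$$

for individual $i$ in household $j$ in area $k$, with $v_k \sim N(0,\sigma_v^2)$, $u_{jk} \sim N(0,\sigma_u^2)$ and $e_{ijk} \sim N(0,\sigma_e^2)$; the partition of variance across the three levels indicates how much of the variation in happiness is attributable to areas, households and individuals, and the estimated area residuals $v_k$ flag areas with unusually positive or negative well-being after adjusting for the covariates.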
|
Title: Verified Null-Move Pruning
|
Abstract: In this article we review standard null-move pruning and introduce our extended version of it, which we call verified null-move pruning. In verified null-move pruning, whenever the shallow null-move search indicates a fail-high, instead of cutting off the search from the current node, the search is continued with reduced depth. Our experiments with verified null-move pruning show that on average, it constructs a smaller search tree with greater tactical strength in comparison to standard null-move pruning. Moreover, unlike standard null-move pruning, which fails badly in zugzwang positions, verified null-move pruning manages to detect most zugzwangs and in such cases conducts a re-search to obtain the correct result. In addition, verified null-move pruning is very easy to implement, and any standard null-move pruning program can use verified null-move pruning by modifying only a few lines of code.
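A simplified sketch of the idea, read directly from the description above; the helper functions (evaluate, in_check, make_null_move, legal_moves, make_move) and the reduction constant R are assumed placeholders, and the full algorithm in the article handles the subtree below a fail-high more carefully than shown here.

```python
R = 2  # null-move depth reduction (a typical value; engines tune this)

def search(pos, depth, alpha, beta, verified=True):
    """Negamax alpha-beta with null-move pruning (simplified sketch).

    With verified=True, a fail-high on the shallow null-move search does not
    cut off the node; the regular search continues with reduced depth, which
    is the behaviour described in the abstract.
    """
    if depth <= 0:
        return evaluate(pos)

    if not in_check(pos) and depth > R:
        # Shallow zero-window search after giving the opponent a free move.
        null_score = -search(make_null_move(pos), depth - R - 1, -beta, -beta + 1, verified)
        if null_score >= beta:
            if not verified:
                return beta        # standard null-move pruning: cut off here
            depth -= 1             # verified: keep searching, one ply shallower

    best = float("-inf")
    for move in legal_moves(pos):
        score = -search(make_move(pos, move), depth - 1, -beta, -alpha, verified)
        best = max(best, score)
        alpha = max(alpha, score)
        if alpha >= beta:
            break                  # beta cutoff
    return best
```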
|
Title: Relations among conditional probabilities
|
Abstract: We describe a Groebner basis of relations among conditional probabilities in a discrete probability space, with any set of conditioned-upon events. They may be specialized to the partially-observed random variable case, the purely conditional case, and other special cases. We also investigate the connection to generalized permutohedra and describe a conditional probability simplex.
|
Title: Commonsense Knowledge, Ontology and Ordinary Language
|
Abstract: Over two decades ago a "quiet revolution" overwhelmingly replaced knowledge-based approaches in natural language processing (NLP) by quantitative (e.g., statistical, corpus-based, machine learning) methods. Although it is our firm belief that purely quantitative approaches cannot be the only paradigm for NLP, dissatisfaction with purely engineering approaches to the construction of large knowledge bases for NLP is somewhat justified. In this paper we hope to demonstrate that both trends are partly misguided and that the time has come to enrich logical semantics with an ontological structure that reflects our commonsense view of the world and the way we talk about it in ordinary language. We will demonstrate that, assuming such an ontological structure, a number of challenges in the semantics of natural language (e.g., metonymy, intensionality, copredication, nominal compounds, etc.) can be properly and uniformly addressed.
|
Title: On the incidence-prevalence relation and length-biased sampling
|
Abstract: For many diseases, logistic and other constraints often render large incidence studies difficult, if not impossible, to carry out. This becomes a drawback, particularly when a new incidence study is needed each time the disease incidence rate is investigated in a different population. However, by carrying out a prevalent cohort study with follow-up it is possible to estimate the incidence rate if it is constant. In this paper we derive the maximum likelihood estimator (MLE) of the overall incidence rate, $\lambda$, as well as age-specific incidence rates, by exploiting the well known epidemiologic relationship, prevalence = incidence $\times$ mean duration ($P = \lambda \times \mu$). We establish the asymptotic distributions of the MLEs, provide approximate confidence intervals for the parameters, and point out that the MLE of $\lambda$ is asymptotically most efficient. Moreover, the MLE of $\lambda$ is the natural estimator obtained by substituting the marginal maximum likelihood estimators for P and $\mu$, respectively, in the expression $P = \lambda \times \mu$. Our work is related to that of Keiding (1991, 2006), who, using a Markov process model, proposed estimators for the incidence rate from a prevalent cohort study follow-up, under three different scenarios. However, each scenario requires assumptions that are both disease specific and depend on the availability of epidemiologic data at the population level. With follow-up, we are able to remove these restrictions, and our results apply in a wide range of circumstances. We apply our methods to data collected as part of the Canadian Study of Health and Ageing to estimate the incidence rate of dementia amongst elderly Canadians.
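The core relation and the plug-in estimator, with a toy number (ours, not from the paper) to fix ideas:

$$P = \lambda \mu \quad\Longrightarrow\quad \widehat{\lambda} = \frac{\widehat{P}}{\widehat{\mu}},$$

so, for instance, an estimated prevalence of $\widehat{P}=0.02$ combined with an estimated mean disease duration of $\widehat{\mu}=8$ years gives $\widehat{\lambda} = 0.02/8 = 0.0025$ new cases per person-year.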
|
Title: Self-Motions of General 3-RPR Planar Parallel Robots
|
Abstract: This paper studies the kinematic geometry of general 3-RPR planar parallel robots with actuated base joints. These robots, while largely overlooked, have simple direct kinematics and a large singularity-free workspace. Furthermore, their kinematic geometry is the same as that of a newly developed parallel robot with SCARA-type motions. Starting from the direct and inverse kinematic model, the expressions for the singularity loci of 3-RPR planar parallel robots are determined. Then, the global behaviour at all singularities is geometrically described by studying the degeneracy of the direct kinematic model. Special cases of self-motions are then examined and the degree of freedom gained in such special configurations is kinematically interpreted. Finally, a practical example is discussed and experimental validations performed on an actual robot prototype are presented.
|
Title: New Tests of Spatial Segregation Based on Nearest Neighbor Contingency Tables
|
Abstract: The spatial clustering of points from two or more classes (or species) has important implications in many fields and may give rise to spatial patterns of segregation and association, which are two major types of spatial interaction between the classes. The null patterns we consider are random labeling (RL) and complete spatial randomness (CSR) of points from two or more classes, the latter called CSR independence. The segregation and association patterns can be studied using a nearest neighbor contingency table (NNCT), which is constructed from the frequencies of nearest neighbor (NN) types. Among NNCT-tests, Pielou's test is liberal under the null patterns, but Dixon's test has the desired significance level under the RL pattern. We propose three new multivariate clustering tests based on NNCTs. We compare the finite sample performance of these new tests with Pielou's and Dixon's tests and Cuzick and Edwards' k-NN tests in terms of empirical size under the null cases and empirical power under various segregation and association alternatives, and provide guidelines for using the tests in practice. We demonstrate that the newly proposed NNCT-tests perform relatively well compared to their competitors, and we illustrate the tests using three example data sets. Furthermore, we compare the NNCT-tests with second-order methods using these examples.
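A minimal sketch of the basic NNCT construction the tests are built on; the test statistics themselves are not shown, and the function name is illustrative.

```python
import numpy as np
from scipy.spatial import cKDTree

def nnct(points, labels):
    """Nearest neighbor contingency table: rows index the class of a point,
    columns the class of its nearest neighbor (basic construction only)."""
    points = np.asarray(points)
    labels = np.asarray(labels)
    classes = np.unique(labels)
    idx = {c: r for r, c in enumerate(classes)}
    # k=2 because each point's nearest neighbor other than itself is wanted.
    _, nn = cKDTree(points).query(points, k=2)
    table = np.zeros((len(classes), len(classes)), dtype=int)
    for i, j in enumerate(nn[:, 1]):
        table[idx[labels[i]], idx[labels[j]]] += 1
    return classes, table
```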
|
Title: Markov switching models: an application to roadway safety
|