Title: Computational Intelligence Characterization Method of Semiconductor Device
Abstract: Characterization of semiconductor devices is used to gather as much data about the device as possible to determine weaknesses in design or trends in the manufacturing process. In this paper, we propose a novel multiple trip point characterization concept to overcome the constraint of the single trip point concept in the device characterization phase. In addition, we use computational intelligence techniques (e.g., neural networks, fuzzy logic and genetic algorithms) to further manipulate these sets of multiple trip point values and tests based on semiconductor test equipment. Our experimental results demonstrate an excellent design parameter variation analysis in the device characterization phase, as well as detection of a set of worst case tests that can provoke the worst case variation, which the traditional approach was not capable of detecting.
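A minimal sketch of the genetic-algorithm ingredient mentioned above, assuming the goal is to search for a small set of trip-point tests that maximizes an observed parameter-variation score; the fitness function, test count and all names below are illustrative placeholders, not the paper's setup.

    # Hypothetical sketch: a genetic algorithm searching for a set of
    # trip-point tests that maximizes a parameter-variation score. The
    # fitness function is synthetic; in practice it would be driven by
    # measurements from the test equipment.
    import random

    random.seed(0)
    N_TESTS = 20               # candidate trip-point tests (assumed)
    POP, GENS, K = 30, 40, 5   # population size, generations, tests per set

    def fitness(test_set):
        # Placeholder score standing in for the measured worst-case
        # variation provoked by this combination of tests.
        return sum((i * 37 % 11) / 10.0 for i in test_set)

    def mutate(test_set):
        s = set(test_set)
        s.discard(random.choice(list(s)))      # drop one test...
        while len(s) < K:
            s.add(random.randrange(N_TESTS))   # ...and add a random one
        return frozenset(s)

    population = [frozenset(random.sample(range(N_TESTS), K)) for _ in range(POP)]
    for _ in range(GENS):
        population.sort(key=fitness, reverse=True)
        survivors = population[:POP // 2]
        population = survivors + [mutate(random.choice(survivors))
                                  for _ in range(POP - len(survivors))]

    best = max(population, key=fitness)
    print("worst-case test set:", sorted(best), "score:", fitness(best))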
Title: A Conversation with Dorothy Gilford
Abstract: In 1946, Public Law 588 of the 79th Congress established the Office of Naval Research (ONR). Its mission was to plan, foster and encourage scientific research in support of Naval problems. The establishment of ONR predates the National Science Foundation and initiated the refocusing of scientific infrastructure in the United States following World War II. At the time, ONR was the only source for federal support of basic research in the United States. Dorothy Gilford was one of the first Heads of the Probability and Statistics program at the Office of Naval Research (1955 to 1962), and she went on to serve as Director of the Mathematical Sciences Division (1962 to 1968). During her time at ONR, Dorothy influenced many areas of statistics and mathematics and was ahead of her time in promoting interdisciplinary projects. Dorothy continued her career at the National Center for Education Statistics (1969 to 1974). She was active in starting international comparisons of education outcomes in different countries, which has influenced educational policy in the United States. Dorothy went on to serve in many capacities at the National Academy of Sciences, including Director of Human Resources Studies (1975 to 1978), Senior Statistician on the Committee on National Statistics (1978 to 1988) and Director of the Board on International Comparative Studies in Education (1988 to 1994). The following is a conversation we had with Dorothy Gilford in March of 2004. We found her to be an interesting person and a remarkable statistician. We hope you agree.
Title: Node discovery problem for a social network
Abstract: Methods to solve a node discovery problem for a social network are presented. Covert nodes refer to nodes which are not directly observable. They transmit influence and affect the resulting collaborative activities among the persons in a social network, but do not appear in the surveillance logs which record the participants of the collaborative activities. Discovering the covert nodes means identifying the suspicious logs where the covert nodes would appear if they became overt. The performance of the methods is demonstrated with a test dataset generated from computationally synthesized networks and a real organization.
Title: The entropy of keys derived from laser speckle
Abstract: Laser speckle has been proposed in a number of papers as a high-entropy source of unpredictable bits for use in security applications. Bit strings derived from speckle can be used for a variety of security purposes such as identification, authentication, anti-counterfeiting, secure key storage, random number generation and tamper protection. The choice of laser speckle as a source of random keys is quite natural, given the chaotic properties of speckle. However, this same chaotic behaviour also causes reproducibility problems. Cryptographic protocols require either zero noise or very low noise in their inputs; hence the issue of error rates is critical to applications of laser speckle in cryptography. Most of the literature uses an error reduction method based on Gabor filtering. Though the method is successful, it has not been thoroughly analysed. In this paper we present a statistical analysis of Gabor-filtered speckle patterns. We introduce a model in which perturbations are described as random phase changes in the source plane. Using this model we compute the second and fourth order statistics of Gabor coefficients. We determine the mutual information between perturbed and unperturbed Gabor coefficients and the bit error rate in the derived bit string. The mutual information provides an absolute upper bound on the number of secure bits that can be reproducibly extracted from noisy measurements.
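A numerical sketch of this setting (not the paper's analytical derivation): Gabor-filter a synthetic speckle pattern, take coefficient signs as key bits, and estimate the bit error rate under random phase changes in the source plane. The field size, filter parameters and perturbation strength are assumptions.

    # Gabor-filter a synthetic speckle pattern, derive key bits from the
    # signs of the coefficients, and estimate the bit error rate under a
    # random phase perturbation in the source plane.
    import numpy as np

    rng = np.random.default_rng(1)
    N = 256

    def speckle(phase):
        # Far-field speckle: FFT of a unit-amplitude field with given phase.
        return np.fft.fft2(np.exp(1j * phase))

    def gabor_bits(field, sigma=8.0, k0=0.3):
        # One Gabor channel: Gaussian envelope around spatial frequency k0.
        fx = np.fft.fftfreq(N)[:, None]
        fy = np.fft.fftfreq(N)[None, :]
        kernel = np.exp(-((fx - k0) ** 2 + fy ** 2) * (2 * np.pi * sigma) ** 2 / 2)
        coeffs = np.fft.ifft2(np.fft.fft2(np.abs(field) ** 2) * kernel)
        return coeffs.real > 0            # binarize: one key bit per pixel

    phase = rng.uniform(0, 2 * np.pi, (N, N))
    bits0 = gabor_bits(speckle(phase))
    # Perturbation model: small random phase changes in the source plane.
    bits1 = gabor_bits(speckle(phase + 0.3 * rng.standard_normal((N, N))))
    print("bit error rate:", np.mean(bits0 != bits1))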
Title: Struggles with Survey Weighting and Regression Modeling
Abstract: The general principles of Bayesian data analysis imply that models for survey responses should be constructed conditional on all variables that affect the probability of inclusion and nonresponse, which are also the variables used in survey weighting and clustering. However, such models can quickly become very complicated, with potentially thousands of poststratification cells. It is then a challenge to develop general families of multilevel probability models that yield reasonable Bayesian inferences. We discuss these issues in the context of several ongoing public health and social surveys. This work is currently open-ended, and we conclude with thoughts on how research could proceed to solve these problems.
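For intuition, a toy example of the poststratification arithmetic involved: with unit weights proportional to N_c/n_c, the classical weighted mean coincides with the poststratified estimate built from cell means, which is the quantity a multilevel model would instead shrink. The cell labels and population counts below are made up.

    # Design-weighted mean versus a poststratified estimate from cell means.
    import numpy as np

    rng = np.random.default_rng(0)
    cells = rng.integers(0, 4, size=200)          # poststratification cell per respondent
    y = 1.0 + 0.5 * cells + rng.standard_normal(200)
    pop_counts = np.array([400, 300, 200, 100])   # assumed population cell sizes

    cell_means = np.array([y[cells == c].mean() for c in range(4)])
    poststrat = (pop_counts * cell_means).sum() / pop_counts.sum()

    # Equivalent classical view: unit weights proportional to N_c / n_c.
    w = pop_counts[cells] / np.bincount(cells, minlength=4)[cells]
    weighted = (w * y).sum() / w.sum()
    print(poststrat, weighted)   # the two agree; multilevel models shrink cell_means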
Title: Comment: Struggles with Survey Weighting and Regression Modeling
Abstract: Comment: Struggles with Survey Weighting and Regression Modeling [arXiv:0710.5005]
Title: Comment: Struggles with Survey Weighting and Regression Modeling
Abstract: Comment: Struggles with Survey Weighting and Regression Modeling [arXiv:0710.5005]
Title: Comment: Struggles with Survey Weighting and Regression Modeling
Abstract: Comment: Struggles with Survey Weighting and Regression Modeling [arXiv:0710.5005]
Title: Comment: Struggles with Survey Weighting and Regression Modeling
Abstract: Comment: Struggles with Survey Weighting and Regression Modeling [arXiv:0710.5005]
Title: Comment: Struggles with Survey Weighting and Regression Modeling
Abstract: Comment: Struggles with Survey Weighting and Regression Modeling [arXiv:0710.5005]
Title: Rejoinder: Struggles with Survey Weighting and Regression Modeling
Abstract: Rejoinder: Struggles with Survey Weighting and Regression Modeling [arXiv:0710.5005]
Title: The William Kruskal Legacy: 1919--2005
Abstract: William Kruskal (Bill) was a distinguished statistician who spent virtually his entire professional career at the University of Chicago, and who had a lasting impact on the Institute of Mathematical Statistics and on the field of statistics more broadly, as well as on many who came in contact with him. Bill passed away last April following an extended illness, and on May 19, 2005, the University of Chicago held a memorial service at which several of Bill's colleagues and collaborators spoke along with members of his family and other friends. This biography and the accompanying commentaries derive in part from brief presentations on that occasion, along with recollections and input from several others. Bill was known personally to most of an older generation of statisticians as an editor and as an intellectual and professional leader. In 1994, Statistical Science published an interview by Sandy Zabell (Vol. 9, 285--303) in which Bill looked back on selected events in his professional life. One of the purposes of the present biography and accompanying commentaries is to reintroduce him to old friends and to introduce him for the first time to new generations of statisticians who never had an opportunity to interact with him and to fall under his influence.
Title: A Tribute to Bill Kruskal
Abstract: Discussion of ``The William Kruskal Legacy: 1919--2005'' by Stephen E. Fienberg, Stephen M. Stigler and Judith M. Tanur [arXiv:0710.5063]
Title: William H. Kruskal and the Development of Coordinate-Free Methods
Abstract: Discussion of ``The William Kruskal Legacy: 1919--2005'' by Stephen E. Fienberg, Stephen M. Stigler and Judith M. Tanur [arXiv:0710.5063]
Title: William Kruskal: My Scholarly and Scientific Model
Abstract: Discussion of ``The William Kruskal Legacy: 1919--2005'' by Stephen E. Fienberg, Stephen M. Stigler and Judith M. Tanur [arXiv:0710.5063]
Title: Working with Bill Kruskal: From 1950 Onward
Abstract: Discussion of ``The William Kruskal Legacy: 1919--2005'' by Stephen E. Fienberg, Stephen M. Stigler and Judith M. Tanur [arXiv:0710.5063]
Title: Bill Kruskal and the Committee on National Statistics
Abstract: Discussion of ``The William Kruskal Legacy: 1919--2005'' by Stephen E. Fienberg, Stephen M. Stigler and Judith M. Tanur [arXiv:0710.5063]
Title: William Kruskal Remembered
Abstract: Discussion of ``The William Kruskal Legacy: 1919--2005'' by Stephen E. Fienberg, Stephen M. Stigler and Judith M. Tanur [arXiv:0710.5063]
Title: William H. Kruskal, Mentor and Friend
Abstract: Discussion of ``The William Kruskal Legacy: 1919--2005'' by Stephen E. Fienberg, Stephen M. Stigler and Judith M. Tanur [arXiv:0710.5063]
Title: Particle Filters for Multiscale Diffusions
Abstract: We consider multiscale stochastic systems that are partially observed at discrete points of the slow time scale. We introduce a particle filter that takes advantage of the multiscale structure of the system to efficiently approximate the optimal filter.
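As a concrete reference point, here is a minimal bootstrap particle filter in which, following the averaging idea, particles are propagated under an assumed homogenized drift for the slow variable rather than the full fast/slow system; the model and all parameters are illustrative, not the paper's.

    # Bootstrap particle filter for a slow variable observed at discrete times.
    import numpy as np

    rng = np.random.default_rng(2)
    T, dt, n_part = 50, 0.1, 500
    obs_sd = 0.5

    def homogenized_step(x):
        # Assumed effective slow dynamics: dX = -X dt + 0.5 dW.
        return x - x * dt + 0.5 * np.sqrt(dt) * rng.standard_normal(x.shape)

    # Simulate a "true" slow path and noisy observations at slow-scale times.
    x_true, obs = 1.0, []
    for _ in range(T):
        x_true = x_true - x_true * dt + 0.5 * np.sqrt(dt) * rng.standard_normal()
        obs.append(x_true + obs_sd * rng.standard_normal())

    particles = rng.standard_normal(n_part)
    for y in obs:
        particles = homogenized_step(particles)
        logw = -0.5 * ((y - particles) / obs_sd) ** 2        # likelihood weights
        w = np.exp(logw - logw.max()); w /= w.sum()
        particles = rng.choice(particles, size=n_part, p=w)  # multinomial resample
    print("filter mean vs truth:", particles.mean(), x_true)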
Title: Combining haplotypers
Abstract: Statistically resolving the underlying haplotype pair for a genotype measurement is an important intermediate step in gene mapping studies, and has received much attention recently. Consequently, a variety of methods for this problem have been developed. Different methods employ different statistical models, and thus implicitly encode different assumptions about the nature of the underlying haplotype structure. Depending on the population sample in question, their relative performance can vary greatly, and it is unclear which method to choose for a particular sample. Instead of choosing a single method, we explore combining predictions returned by different methods in a principled way, and thereby circumvent the problem of method selection. We propose several techniques for combining haplotype reconstructions and analyze their computational properties. In an experimental study on real-world haplotype data we show that such techniques can provide more accurate and robust reconstructions, and are useful for outlier detection. Typically, the combined prediction is at least as accurate as or even more accurate than the best individual method, effectively circumventing the method selection problem.
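One of the simplest instances of such a combination scheme is a site-wise majority vote, sketched below; a real combiner must also align phase between the methods' haplotype pairs, a step skipped here, and the toy predictions are made up.

    # Site-wise majority vote over haplotype reconstructions from several methods.
    import numpy as np

    # Three methods' predictions (one row per method) for 8 heterozygous sites,
    # coded 0/1 for the allele assigned to the first haplotype of the pair.
    predictions = np.array([
        [0, 1, 1, 0, 0, 1, 0, 1],
        [0, 1, 0, 0, 0, 1, 0, 1],
        [0, 1, 1, 0, 1, 1, 0, 1],
    ])

    votes = predictions.sum(axis=0)
    consensus = (votes > predictions.shape[0] / 2).astype(int)
    # Disagreement per site doubles as a simple outlier/uncertainty flag.
    disagreement = np.minimum(votes, predictions.shape[0] - votes)
    print("consensus:", consensus)
    print("sites with any disagreement:", np.flatnonzero(disagreement))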
Title: A geometric approach to maximum likelihood estimation of the functional principal components from sparse longitudinal data
Abstract: In this paper, we consider the problem of estimating the eigenvalues and eigenfunctions of the covariance kernel (i.e., the functional principal components) from sparse and irregularly observed longitudinal data. We approach this problem through a maximum likelihood method assuming that the covariance kernel is smooth and finite dimensional. We exploit the smoothness of the eigenfunctions to reduce dimensionality by restricting them to a lower dimensional space of smooth functions. The estimation scheme is developed based on a Newton-Raphson procedure using the fact that the basis coefficients representing the eigenfunctions lie on a Stiefel manifold. We also address the selection of the right number of basis functions, as well as that of the dimension of the covariance kernel by a second order approximation to the leave-one-curve-out cross-validation score that is computationally very efficient. The effectiveness of our procedure is demonstrated by simulation studies and an application to a CD4 counts data set. In the simulation studies, our method performs well on both estimation and model selection. It also outperforms two existing approaches: one based on a local polynomial smoothing of the empirical covariances, and another using an EM algorithm.
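A sketch of the reduced-rank representation underlying the method: the covariance kernel is expanded in a small set of smooth basis functions, so its eigenfunctions are read off from an eigendecomposition of the coefficient matrix. The Fourier basis and the rank-2 coefficient matrix below are illustrative stand-ins for quantities the paper estimates by restricted maximum likelihood.

    # Reduced-rank covariance kernel K(s,t) = B(s) C B(t)' and its FPCs.
    import numpy as np

    tgrid = np.linspace(0, 1, 101)
    # Smooth basis: first few Fourier functions, orthonormalized on the grid.
    raw = np.column_stack([np.ones_like(tgrid),
                           np.sin(2 * np.pi * tgrid), np.cos(2 * np.pi * tgrid),
                           np.sin(4 * np.pi * tgrid)])
    B, _ = np.linalg.qr(raw)                       # orthonormal columns

    # Assumed rank-2 coefficient matrix (in practice estimated by restricted
    # ML, with basis coefficients constrained to a Stiefel manifold).
    U = np.linalg.qr(np.random.default_rng(3).standard_normal((4, 2)))[0]
    C = U @ np.diag([2.0, 0.5]) @ U.T

    evals, evecs = np.linalg.eigh(C)
    order = np.argsort(evals)[::-1][:2]
    eigenfunctions = B @ evecs[:, order]           # smooth FPCs on the grid
    print("eigenvalues:", evals[order])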
Title: Some Reflections on the Task of Content Determination in the Context of Multi-Document Summarization of Evolving Events
Abstract: Despite its importance, the task of summarizing evolving events has received little attention from researchers in the field of multi-document summarization. In a previous paper (Afantenos et al. 2007) we presented a methodology for the automatic summarization of documents, emitted by multiple sources, which describe the evolution of an event. At the heart of this methodology lies the identification of similarities and differences between the various documents, along two axes: the synchronic and the diachronic. This is achieved by introducing the notion of Synchronic and Diachronic Relations. Those relations connect the messages that are found in the documents, thus resulting in a graph which we call a grid. Although the creation of the grid completes the Document Planning phase of a typical NLG architecture, it can be the case that the number of messages contained in a grid is very large, thus exceeding the required compression rate. In this paper we provide some initial thoughts on a probabilistic model which can be applied at the Content Determination stage, and which tries to alleviate this problem.
Title: Parameter Estimation for Partially Observed Hypoelliptic Diffusions
Abstract: Hypoelliptic diffusion processes can be used to model a variety of phenomena in applications ranging from molecular dynamics to audio signal analysis. We study parameter estimation for such processes in situations where we observe some components of the solution at discrete times. Since exact likelihoods for the transition densities are typically not known, approximations are used that are expected to work well in the limit of small inter-sample times $\Delta t$ and large total observation times $N\Delta t$. Hypoellipticity together with partial observation leads to ill-conditioning, requiring a judicious combination of approximate likelihoods for the various parameters to be estimated. We combine these in a deterministic scan Gibbs sampler, alternating between missing data in the unobserved solution components and parameters. Numerical experiments illustrate asymptotic consistency of the method when applied to simulated data. The paper concludes with an application of the Gibbs sampler to molecular dynamics data.
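To illustrate the deterministic-scan structure alone, here is a toy Gibbs sampler that alternates between imputing unobserved states and drawing a parameter, for a simple AR(1) model observed only at even times; it is not the hypoelliptic model or the approximate likelihoods of the paper.

    # Deterministic-scan Gibbs: impute missing states, then draw the parameter.
    import numpy as np

    rng = np.random.default_rng(4)
    T, sigma, theta_true = 200, 0.3, 0.8
    y = np.zeros(T)
    for t in range(1, T):
        y[t] = theta_true * y[t - 1] + sigma * rng.standard_normal()
    observed = np.arange(T) % 2 == 0

    path, theta = y * observed, 0.0        # crude initialization
    for sweep in range(2000):
        # 1) Missing-data step: exact Gaussian conditional given both neighbours.
        for t in np.flatnonzero(~observed):
            if t + 1 < T:
                var = sigma**2 / (1 + theta**2)
                mean = theta * (path[t - 1] + path[t + 1]) / (1 + theta**2)
            else:
                var, mean = sigma**2, theta * path[t - 1]
            path[t] = mean + np.sqrt(var) * rng.standard_normal()
        # 2) Parameter step: conjugate draw for theta under a flat prior.
        s_xx = np.sum(path[:-1] ** 2)
        theta = (np.sum(path[:-1] * path[1:]) / s_xx
                 + sigma / np.sqrt(s_xx) * rng.standard_normal())
    print("posterior draw of theta:", theta, "truth:", theta_true)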
Title: Discriminated Belief Propagation
Abstract: Near optimal decoding of good error control codes is generally a difficult task. However, for a certain type of (sufficiently) good codes an efficient decoding algorithm with near optimal performance exists. These codes are defined via a combination of constituent codes with low complexity trellis representations. Their decoding algorithm is an instance of (loopy) belief propagation and is based on an iterative transfer of constituent beliefs. The beliefs are thereby given by the symbol probabilities computed in the constituent trellises. Even though weak constituent codes are employed, close to optimal performance is obtained, i.e., the encoder/decoder pair (almost) achieves the information theoretic capacity. However, (loopy) belief propagation only performs well for a rather specific set of codes, which limits its applicability. In this paper a generalisation of iterative decoding is presented. It is proposed to transfer more values than just the constituent beliefs. This is achieved by the transfer of beliefs obtained by independently investigating parts of the code space. This leads to the concept of discriminators, which are used to improve the decoder resolution within certain areas and to define discriminated symbol beliefs. It is shown that these beliefs approximate the overall symbol probabilities. This leads to an iteration rule that (below channel capacity) typically only admits the solution of the overall decoding problem. Via a Gauss approximation a low complexity version of this algorithm is derived. Moreover, the approach may then be applied to a wide range of channel maps without significant complexity increase.
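For reference, a minimal implementation of the baseline being generalised: standard (loopy) belief propagation decoding of a toy parity-check code. The transfer of additional discriminated beliefs proposed in the paper is not shown, and the code and channel LLRs are arbitrary examples.

    # Sum-product (loopy) belief propagation on a tiny parity-check code.
    import numpy as np

    H = np.array([[1, 1, 0, 1, 0, 0],     # parity-check matrix (toy code)
                  [0, 1, 1, 0, 1, 0],
                  [1, 0, 0, 0, 1, 1]])
    llr_ch = np.array([-2.0, 1.5, -0.3, 2.5, -1.8, 0.4])   # channel LLRs

    m, n = H.shape
    msg_vc = np.tile(llr_ch, (m, 1)) * H          # variable-to-check messages
    for _ in range(20):
        # Check-to-variable: tanh rule over the other neighbours of each check.
        t = np.tanh(np.clip(msg_vc, -30, 30) / 2)
        msg_cv = np.zeros_like(msg_vc, dtype=float)
        for i in range(m):
            for j in np.flatnonzero(H[i]):
                others = [k for k in np.flatnonzero(H[i]) if k != j]
                msg_cv[i, j] = 2 * np.arctanh(np.prod(t[i, others]))
        # Variable-to-check: channel LLR plus the other incoming check messages.
        total = llr_ch + msg_cv.sum(axis=0)
        msg_vc = (total - msg_cv) * H

    beliefs = llr_ch + msg_cv.sum(axis=0)          # final symbol beliefs
    print("decoded bits:", (beliefs < 0).astype(int))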
Title: Code Similarity on High Level Programs
Abstract: This paper presents a new approach to code similarity on high level programs. Our technique is based on Fast Dynamic Time Warping, which builds a warp path, or point-to-point relation, under local restrictions. The source code is represented as Time Series using the operators of the programming language, which makes the comparison possible. This enables the detection of subsequences that represent similar code instructions. In contrast with other code similarity algorithms, we do not perform feature extraction. The experiments show that two source codes are similar when their respective Time Series are similar.
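A small dynamic time warping sketch in the spirit of this approach: each program is reduced to a sequence of numeric operator codes, and two sequences are aligned under a Sakoe-Chiba window, the kind of local restriction that makes the fast variant cheap. The operator encoding below is an assumption, not the paper's.

    # Windowed DTW distance between operator sequences of two programs.
    import numpy as np

    def dtw(a, b, window=3):
        n, m = len(a), len(b)
        window = max(window, abs(n - m))       # window must cover the diagonal
        D = np.full((n + 1, m + 1), np.inf)
        D[0, 0] = 0.0
        for i in range(1, n + 1):
            for j in range(max(1, i - window), min(m, i + window) + 1):
                cost = abs(a[i - 1] - b[j - 1])
                D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
        return D[n, m]

    # Hypothetical numeric codes for operators (=, +, *, if, for, ...).
    prog1 = [1, 4, 2, 2, 5, 1, 3]
    prog2 = [1, 4, 2, 5, 1, 3]        # same logic, one statement removed
    prog3 = [5, 5, 3, 1, 1, 4, 2]
    print("similar  :", dtw(prog1, prog2))
    print("different:", dtw(prog1, prog3))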
Title: An Elegant Method for Generating Multivariate Poisson Random Variable
Abstract: Generating multivariate Poisson data is essential in many applications. Current simulation methods suffer from limitations ranging from computational complexity to restrictions on the structure of the correlation matrix. We propose a computationally efficient and conceptually appealing method for generating multivariate Poisson data. The method is based on simulating multivariate Normal data and converting them to achieve a specific correlation matrix and Poisson rate vector. This allows for generating data that have positive or negative correlations as well as different rates.
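A minimal sketch of the recipe the abstract describes: draw correlated multivariate Normal variates and push them through the Poisson quantile function componentwise. The Gaussian correlation matrix is used directly here, whereas the paper's method additionally adjusts it so the resulting Poisson correlations hit the target.

    # NORTA-style generation of correlated Poisson data.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(5)
    rates = np.array([2.0, 5.0, 1.0])             # target Poisson rate vector
    corr = np.array([[1.0,  0.6, -0.3],
                     [0.6,  1.0,  0.2],
                     [-0.3, 0.2,  1.0]])          # Gaussian copula correlation

    L = np.linalg.cholesky(corr)
    z = rng.standard_normal((100_000, 3)) @ L.T   # correlated normals
    u = stats.norm.cdf(z)                          # uniforms via probability transform
    x = stats.poisson.ppf(u, rates).astype(int)    # componentwise Poisson quantiles

    print("sample means:", x.mean(axis=0))         # close to `rates`
    print("sample corr:\n", np.corrcoef(x, rowvar=False).round(2))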
Title: Nonparametric Conditional Inference for Regression Coefficients with Application to Configural Polysampling
Abstract: We consider inference procedures, conditional on an observed ancillary statistic, for regression coefficients under a linear regression setup where the unknown error distribution is specified nonparametrically. We establish conditional asymptotic normality of the regression coefficient estimators under regularity conditions, and formally justify the approach of plugging in kernel-type density estimators in conditional inference procedures. Simulation results show that the approach yields accurate conditional coverage probabilities when used for constructing confidence intervals. The plug-in approach can be applied in conjunction with configural polysampling to derive robust conditional estimators adaptive to a confrontation of contrasting scenarios. We demonstrate this by investigating the conditional mean squared error of location estimators under various confrontations in a simulation study, which successfully extends configural polysampling to a nonparametric context.
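In isolation, the plug-in ingredient looks as follows: estimate the unknown error density from regression residuals with a kernel-type density estimator, which would then be fed into the conditional inference procedures (the conditioning on the ancillary statistic is not reproduced in this sketch).

    # Kernel density plug-in for the unknown regression error distribution.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(6)
    n = 300
    X = np.column_stack([np.ones(n), rng.standard_normal(n)])
    beta_true = np.array([1.0, 2.0])
    y = X @ beta_true + rng.laplace(scale=0.7, size=n)   # non-Gaussian errors

    beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta_hat

    f_hat = stats.gaussian_kde(resid)                    # kernel-type density estimate
    grid = np.linspace(resid.min(), resid.max(), 5)
    print("estimated error density on a grid:", f_hat(grid).round(3))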
Title: On estimating covariances between many assets with histories of highly variable length
Abstract: Quantitative portfolio allocation requires the accurate and tractable estimation of covariances between a large number of assets, whose histories can greatly vary in length. Such data are said to follow a monotone missingness pattern, under which the likelihood has a convenient factorization. Upon further assuming that asset returns are multivariate normally distributed, with histories at least as long as the total asset count, maximum likelihood (ML) estimates are easily obtained by performing repeated ordinary least squares (OLS) regressions, one for each asset. Things get more interesting when there are more assets than historical returns. OLS becomes unstable due to rank-deficient design matrices, which is called a "big p small n" problem. We explore remedies that involve making a change of basis, as in principal components or partial least squares regression, or by applying shrinkage methods like ridge regression or the lasso. This enables the estimation of covariances between large sets of assets with histories of essentially arbitrary length, and offers improvements in accuracy and interpretation. We further extend the method by showing how external factors can be incorporated. This allows for the adaptive use of factors without the restrictive assumptions common in factor models. Our methods are demonstrated on randomly generated data, and then benchmarked by the performance of balanced portfolios using real historical financial returns. An accompanying R package called monomvn, containing code implementing the estimators described herein, has been made freely available on CRAN.
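A compact version of the repeated-regression ML estimator for monotone missingness described above, without the regularized "big p small n" extensions or the features of the monomvn package; it assumes columns are ordered by decreasing history length and each history is longer than the number of preceding columns.

    # ML mean and covariance under monotone missingness via repeated OLS.
    import numpy as np

    def monotone_ml(Y, n_obs):
        """Y: (n, p) array; column j is valid on its first n_obs[j] rows."""
        p = len(n_obs)
        mu, S = np.zeros(p), np.zeros((p, p))
        mu[0] = Y[:n_obs[0], 0].mean()
        S[0, 0] = Y[:n_obs[0], 0].var()               # ML variance (divides by n)
        for j in range(1, p):
            nj = n_obs[j]
            Z = np.column_stack([np.ones(nj), Y[:nj, :j]])
            coef, *_ = np.linalg.lstsq(Z, Y[:nj, j], rcond=None)
            a, b = coef[0], coef[1:]
            resid = Y[:nj, j] - Z @ coef
            mu[j] = a + b @ mu[:j]
            S[j, :j] = S[:j, j] = b @ S[:j, :j]
            S[j, j] = resid @ resid / nj + b @ S[:j, :j] @ b
        return mu, S

    rng = np.random.default_rng(7)
    full = rng.multivariate_normal([0, 0, 0],
                                   [[1, .5, .2], [.5, 1, .3], [.2, .3, 1]], 500)
    mu, S = monotone_ml(full, n_obs=[500, 350, 200])  # staggered asset histories
    print(np.round(S, 2))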
Title: 2-level fractional factorial designs which are the union of non trivial regular designs
Abstract: Every fraction is a union of points, which are trivial regular fractions. To characterize non-trivial decompositions, we derive a condition for the inclusion of a regular fraction as follows. Let $F = \sum_\alpha b_\alpha X^\alpha$ be the indicator polynomial of a generic fraction, see Fontana et al., JSPI 2000, 149-172. Regular fractions are characterized by $R = \frac{1}{|\mathcal L|} \sum_{\alpha \in \mathcal L} e_\alpha X^\alpha$, where $\alpha \mapsto e_\alpha$ is a group homomorphism from $\mathcal L \subset \mathbb Z_2^d$ into $\{-1,+1\}$. The regular $R$ is a subset of the fraction $F$ if $FR = R$, which in turn is equivalent to $\sum_t F(t)R(t) = \sum_t R(t)$. If $\mathcal H = \{\alpha_1, \dots, \alpha_k\}$ is a generating set of $\mathcal L$, and $R = \frac{1}{2^k}(1 + e_1 X^{\alpha_1}) \cdots (1 + e_k X^{\alpha_k})$, with $e_j = \pm 1$, $j = 1, \dots, k$, the inclusion condition in terms of the $b_\alpha$'s is $b_0 + e_1 b_{\alpha_1} + \cdots + e_1 \cdots e_k b_{\alpha_1 + \cdots + \alpha_k} = 1$ (*). The last part of the paper discusses some examples to investigate the practical applicability of condition (*). This paper is an offspring of the Alcotra 158 EU research contract on the planning of sequential designs for sample surveys in tourism statistics.
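Condition (*) can be checked numerically: compute the required indicator coefficients $b_\alpha$ of a fraction and test whether the regular fraction generated by given $(\alpha_j, e_j)$ pairs is contained in it. The design below is an arbitrary illustration.

    # Numerical check of the inclusion condition (*).
    import numpy as np
    from itertools import combinations, product

    def b_coeff(points, alpha):
        # b_alpha = 2^{-d} * sum over design points of X^alpha(t).
        d = points.shape[1]
        return np.prod(points[:, alpha == 1], axis=1).sum() / 2**d

    def includes_regular(points, alphas, signs):
        k = len(alphas)
        total = 0.0
        for r in range(k + 1):                  # all subsets of the generators
            for S in combinations(range(k), r):
                alpha = (np.bitwise_xor.reduce([alphas[j] for j in S], axis=0)
                         if S else np.zeros_like(alphas[0]))
                e = np.prod([signs[j] for j in S]) if S else 1
                total += e * b_coeff(points, alpha)
        return np.isclose(total, 1.0)           # condition (*)

    # Toy fraction in {-1,+1}^3: the regular half-fraction with t1*t2*t3 = +1,
    # plus one extra point, so it is a non-trivial union.
    half = np.array([t for t in product([-1, 1], repeat=3) if np.prod(t) == 1])
    frac = np.vstack([half, [[-1, -1, -1]]])
    print(includes_regular(frac, alphas=[np.array([1, 1, 1])], signs=[1]))  # True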
Title: Implementing Quasi-Monte Carlo Simulations with Linear Transformations
Abstract: Pricing exotic multi-asset path-dependent options requires extensive Monte Carlo simulations. In recent years interest in the Quasi-Monte Carlo technique has been renewed, and several results have been proposed to improve its efficiency via the notion of effective dimension. To this end, Imai and Tan introduced a general variance reduction technique that minimizes the nominal dimension of the Monte Carlo method. Taking these advantages into account, we investigate this approach in detail in order to make it computationally faster. Indeed, we realize the linear transformation decomposition with a fast ad hoc QR decomposition that considerably reduces the computational burden. This setting makes the linear transformation method even more convenient from the computational point of view. We implement a high-dimensional (2500) Quasi-Monte Carlo simulation combined with the linear transformation in order to price Asian basket options with the same set of parameters published by Imai and Tan. For the simulation of the high-dimensional random sample, we use a 50-dimensional scrambled Sobol sequence for the first 50 components, determined by the linear transformation method, and pad the remaining ones out with Latin Hypercube Sampling. The aim of this numerical setting is to investigate the accuracy of the estimation by giving a higher convergence rate only to those components selected by the linear transformation technique. We also run our simulation experiment using the standard Cholesky and principal component decomposition methods with pseudo-random and Latin Hypercube sampling generators. Finally, we compare our results and computational times with those presented by Imai and Tan.
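A sketch of the sampling scheme described above, at reduced size: the leading coordinates (those the linear transformation deems important) come from a scrambled Sobol sequence and the remaining coordinates are padded with Latin Hypercube Sampling. Dimensions are scaled down from the paper's 2500, and the LT/QR matrix itself is not constructed.

    # Scrambled Sobol head with Latin Hypercube padding.
    import numpy as np
    from scipy.stats import qmc, norm

    n, d_sobol, d_total = 1024, 8, 64      # illustrative sizes
    sobol = qmc.Sobol(d=d_sobol, scramble=True, seed=8).random(n)
    lhs = qmc.LatinHypercube(d=d_total - d_sobol, seed=8).random(n)
    u = np.clip(np.hstack([sobol, lhs]), 1e-12, 1 - 1e-12)

    z = norm.ppf(u)                        # standard normal draws for the paths
    # In the full method, z is multiplied by the LT/QR-derived matrix so that
    # the "effective" directions consume the Sobol coordinates.
    print(z.shape, z.mean().round(3), z.std().round(3))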
Title: Some aspects of extreme value theory under serial dependence
Abstract: On the occasion of Laurens de Haan's 70th birthday, we discuss two aspects of the statistical inference on the extreme value behavior of time series with a particular emphasis on his important contributions. First, the performance of a direct marginal tail analysis is compared with that of a model-based approach using an analysis of residuals. Second, the importance of the extremal index as a measure of the serial extremal dependence is discussed by the example of solutions of a stochastic recurrence equation.
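Since the extremal index is central here, a minimal runs-estimator sketch: on a heavy-tailed AR(1) series, for which extremes cluster (the index is $1 - \varphi$ when the innovations have tail index 1), count declustered exceedance clusters per exceedance. The threshold and run length below are arbitrary choices.

    # Runs estimator of the extremal index on a heavy-tailed AR(1) series.
    import numpy as np

    rng = np.random.default_rng(10)
    n, phi = 50_000, 0.7
    x = np.zeros(n)
    for t in range(1, n):                      # Cauchy innovations: extremes cluster
        x[t] = phi * x[t - 1] + rng.standard_cauchy()

    u = np.quantile(x, 0.99)                   # high threshold
    exc = np.flatnonzero(x > u)
    run = 10                                   # declustering run length
    clusters = 1 + np.sum(np.diff(exc) > run)  # new cluster after a long gap
    theta_hat = clusters / len(exc)
    print("runs estimate of the extremal index:", round(theta_hat, 2))  # theory: 1 - phi = 0.3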
Title: Supervised Machine Learning with a Novel Pointwise Density Estimator
Abstract: This article proposes a novel density estimation based algorithm for carrying out supervised machine learning. The proposed algorithm features O(n) time complexity for generating a classifier, where n is the number of sampling instances in the training dataset. This feature is highly desirable in contemporary applications that involve large and still growing databases. In comparison with kernel density estimation based approaches, the mathematical foundation of the proposed algorithm does not rest on the assumption that the number of training instances approaches infinity. As a result, a classifier generated with the proposed algorithm may deliver higher prediction accuracy than a kernel density estimation based classifier in some cases.
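The general decision rule such classifiers build on, shown with a plain histogram density standing in for the authors' pointwise estimator (which the abstract does not specify): classify to the class whose estimated density times class prior is largest; building the histograms is a single O(n) pass over the training data.

    # Generic density-based classification with per-class histogram densities.
    import numpy as np

    def fit(x, y, bins=20):
        edges = np.linspace(x.min(), x.max(), bins + 1)
        models = {}
        for c in np.unique(y):
            hist, _ = np.histogram(x[y == c], bins=edges, density=True)
            models[c] = (hist, np.mean(y == c))      # class density + prior
        return edges, models

    def predict(x, edges, models):
        idx = np.clip(np.searchsorted(edges, x) - 1, 0, len(edges) - 2)
        scores = {c: h[idx] * prior for c, (h, prior) in models.items()}
        return max(scores, key=scores.get)

    rng = np.random.default_rng(9)
    x = np.concatenate([rng.normal(0, 1, 500), rng.normal(3, 1, 500)])
    y = np.repeat([0, 1], 500)
    edges, models = fit(x, y)
    print(predict(2.4, edges, models))   # likely class 1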