Title: From formulas to cirquents in computability logic
|
Abstract: Computability logic (CoL) (see http://www.cis.upenn.edu/~giorgi/cl.html) is a recently introduced semantical platform and ambitious program for redeveloping logic as a formal theory of computability, as opposed to the formal theory of truth that logic has more traditionally been. Its expressions represent interactive computational tasks seen as games played by a machine against the environment, and "truth" is understood as existence of an algorithmic winning strategy. With logical operators standing for operations on games, the formalism of CoL is open-ended, and has already undergone a series of extensions. This article extends the expressive power of CoL in a qualitatively new way, generalizing formulas (to which the earlier languages of CoL were limited) to circuit-style structures termed cirquents. The latter, unlike formulas, are able to account for subgame/subtask sharing between different parts of the overall game/task. Among the many advantages offered by this ability is that it allows us to capture, refine and generalize the well-known independence-friendly logic which, after the present leap forward, naturally becomes a conservative fragment of CoL, just as classical logic is known to be a conservative fragment of the formula-based version of CoL. Technically, this paper is self-contained, and can be read without any prior familiarity with CoL.
|
Title: Characterising equilibrium logic and nested logic programs: Reductions and complexity
|
Abstract: Equilibrium logic is an approach to nonmonotonic reasoning that extends the stable-model and answer-set semantics for logic programs. In particular, it includes the general case of nested logic programs, where arbitrary Boolean combinations are permitted in heads and bodies of rules, as special kinds of theories. In this paper, we present polynomial reductions of the main reasoning tasks associated with equilibrium logic and nested logic programs into quantified propositional logic, an extension of classical propositional logic where quantifications over atomic formulas are permitted. We provide reductions not only for decision problems, but also for the central semantical concepts of equilibrium logic and nested logic programs. In particular, our encodings map a given decision problem into some formula such that the latter is valid precisely when the former holds. The basic tasks we deal with here are the consistency problem, brave reasoning, and skeptical reasoning. Additionally, we also provide encodings for testing equivalence of theories or programs under different notions of equivalence, viz. ordinary, strong, and uniform equivalence. For all considered reasoning tasks, we analyse their computational complexity and give strict complexity bounds.
|
Title: Reconstructing DNA copy number by penalized estimation and imputation
|
Abstract: Recent advances in genomics have underscored the surprising ubiquity of DNA copy number variation (CNV). Fortunately, modern genotyping platforms also detect CNVs with fairly high reliability. Hidden Markov models and algorithms have played a dominant role in the interpretation of CNV data. Here we explore CNV reconstruction via estimation with a fused-lasso penalty as suggested by Tibshirani and Wang [Biostatistics 9 (2008) 18--29]. We mount a fresh attack on this difficult optimization problem by the following: (a) changing the penalty terms slightly by substituting a smooth approximation to the absolute value function, (b) designing and implementing a new MM (majorization--minimization) algorithm, and (c) applying a fast version of Newton's method to jointly update all model parameters. Together these changes enable us to minimize the fused-lasso criterion in a highly effective way. We also reframe the reconstruction problem in terms of imputation via discrete optimization. This approach is easier and more accurate than parameter estimation because it relies on the fact that only a handful of possible copy number states exist at each SNP. The dynamic programming framework has the added bonus of exploiting information that the current fused-lasso approach ignores. The accuracy of our imputations is comparable to that of hidden Markov models at a substantially lower computational cost.
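The MM idea in steps (a)-(c) can be made concrete. Below is a minimal sketch, not the authors' implementation: it smooths |x| as sqrt(x^2 + eps), majorizes each such term by a quadratic, and solves the resulting linear system at every iteration. A dense solve is used here for brevity; the system is tridiagonal, so an O(n) solve is possible. All names and parameter defaults are illustrative.

```python
import numpy as np

def fused_lasso_mm(y, lam1=1.0, lam2=1.0, eps=1e-6, n_iter=100):
    """MM sketch for sum (y_i - b_i)^2 + lam1*sum s(b_i) + lam2*sum s(b_{i+1}-b_i),
    with s(x) = sqrt(x^2 + eps) a smooth stand-in for |x|."""
    y = np.asarray(y, dtype=float)
    n = len(y)
    b = y.copy()
    for _ in range(n_iter):
        # Majorize s(x) at the current iterate: s(x) <= s(x0) + (x^2 - x0^2)/(2 s(x0)).
        w1 = lam1 / (2.0 * np.sqrt(b ** 2 + eps))          # sparsity weights
        d = np.diff(b)
        w2 = lam2 / (2.0 * np.sqrt(d ** 2 + eps))          # fusion weights
        # The quadratic surrogate is minimized by a (tridiagonal) linear system.
        A = np.diag(1.0 + w1)
        for i in range(n - 1):
            A[i, i] += w2[i];     A[i + 1, i + 1] += w2[i]
            A[i, i + 1] -= w2[i]; A[i + 1, i] -= w2[i]
        b = np.linalg.solve(A, y)
    return b
```

By the MM property, each update cannot increase the smoothed fused-lasso criterion.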
|
Title: A Neural Network Classifier of Volume Datasets
|
Abstract: Many state-of-the-art visualization techniques must be tailored to the specific type of dataset, its modality (CT, MRI, etc.), the recorded object or anatomical region (head, spine, abdomen, etc.) and other parameters related to the data acquisition process. While parts of this information (imaging modality and acquisition sequence) may be obtained from the meta-data stored with the volume scan, there is important information which is not stored explicitly (anatomical region, tracing compound). Also, meta-data might be incomplete, inappropriate or simply missing. This paper presents a novel and simple method of determining the type of a dataset from previously defined categories. 2D histograms based on the intensity and gradient magnitude of datasets are used as input to a neural network, which classifies a dataset into one of the several categories it was trained with. The proposed method is an important building block for visualization systems to be used autonomously by non-experts. The method has been tested on 80 datasets, divided into 3 classes and a "rest" class. A significant result is the ability of the system to classify datasets into a specific class after being trained with only one dataset of that class. Other advantages of the method are its easy implementation and its high computational performance.
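As a rough illustration of the feature construction described above (the network architecture and training are the paper's own), a joint intensity/gradient-magnitude histogram can be computed along these lines; the bin count and normalization are my assumptions:

```python
import numpy as np

def feature_histogram(volume, bins=32):
    """Joint 2D histogram of voxel intensity and gradient magnitude,
    flattened into a fixed-length feature vector for a classifier."""
    gz, gy, gx = np.gradient(volume.astype(float))
    gmag = np.sqrt(gx ** 2 + gy ** 2 + gz ** 2)
    hist, _, _ = np.histogram2d(volume.ravel(), gmag.ravel(), bins=bins)
    return (hist / hist.sum()).ravel()   # normalize so dataset size drops out
```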
|
Title: Properties of quasi-alphabetic tree bimorphisms
|
Abstract: We study the class of quasi-alphabetic relations, i.e., tree transformations defined by tree bimorphisms with two quasi-alphabetic tree homomorphisms and a regular tree language. We present a canonical representation of these relations; as an immediate consequence, we get the closure under union. Also, we show that they are not closed under intersection and complement, and do not preserve most common operations on trees (branches, subtrees, v-product, v-quotient, f-top-catenation). Moreover, we prove that the translations defined by quasi-alphabetic tree bimorphisms are exactly products of context-free string languages. We conclude by presenting the connections between quasi-alphabetic relations, alphabetic relations and classes of tree transformations defined by several types of top-down tree transducers. Furthermore, we show that quasi-alphabetic relations preserve the recognizable and the algebraic tree languages.
|
Title: Without a 'doubt'? Unsupervised discovery of downward-entailing operators
|
Abstract: An important part of textual inference is making deductions involving monotonicity, that is, determining whether a given assertion entails restrictions or relaxations of that assertion. For instance, the statement 'We know the epidemic spread quickly' does not entail 'We know the epidemic spread quickly via fleas', but 'We doubt the epidemic spread quickly' entails 'We doubt the epidemic spread quickly via fleas'. Here, we present the first algorithm for the challenging lexical-semantics problem of learning linguistic constructions that, like 'doubt', are downward entailing (DE). Our algorithm is unsupervised, resource-lean, and effective, accurately recovering many DE operators that are missing from the hand-constructed lists that textual-inference systems currently use.
|
Title: Exact Indexing for Massive Time Series Databases under Time Warping Distance
|
Abstract: Among the many existing distance measures for time series data, Dynamic Time Warping (DTW) distance has been recognized as one of the most accurate and suitable, owing to its flexibility in sequence alignment. However, DTW distance calculation is computationally intensive. Especially in very large time series databases, a sequential scan through the entire database is impractical, and even random access that exploits index structures suffers, since the high dimensionality of time series data incurs extremely high I/O cost. More specifically, a sequential structure incurs a high CPU cost but a low I/O cost, while an index structure requires a low CPU cost but a high I/O cost. In this work, we therefore propose a novel indexed sequential structure called TWIST (Time Warping in Indexed Sequential sTructure) which benefits from both sequential access and an index structure. When a query sequence is issued, TWIST calculates lower-bounding distances between a group of candidate sequences and the query sequence, and then determines the data access order in advance, greatly reducing the number of both sequential and random accesses. Our indexed sequential structure speeds up query processing by a few orders of magnitude. In addition, our method shows superiority over existing rival methods in terms of query processing time, number of page accesses, and storage requirement, while guaranteeing no false dismissals.
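The abstract does not specify TWIST's particular lower bound, so as a stand-in, here is the classical LB_Keogh bound that is often used to order candidate access in DTW search; sequences are assumed to be equal-length numpy arrays and r is the warping-band half-width:

```python
import numpy as np

def lb_keogh(query, candidate, r):
    """LB_Keogh: a cheap lower bound on DTW distance under a Sakoe-Chiba band.
    Candidate points outside the query's band envelope contribute their
    squared distance to the nearer envelope edge."""
    n = len(query)
    total = 0.0
    for i, c in enumerate(candidate):
        lo, hi = max(0, i - r), min(n, i + r + 1)
        upper, lower = query[lo:hi].max(), query[lo:hi].min()
        if c > upper:
            total += (c - upper) ** 2
        elif c < lower:
            total += (c - lower) ** 2
    return total
```

Because the bound never exceeds the true DTW distance, pruning with it cannot cause false dismissals.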
|
Title: Observed Universality of Phase Transitions in High-Dimensional Geometry, with Implications for Modern Data Analysis and Signal Processing
|
Abstract: We review connections between phase transitions in high-dimensional combinatorial geometry and phase transitions occurring in modern high-dimensional data analysis and signal processing. In data analysis, such transitions arise as abrupt breakdown of linear model selection, robust data fitting or compressed sensing reconstructions, when the complexity of the model or the number of outliers increases beyond a threshold. In combinatorial geometry these transitions appear as abrupt changes in the properties of face counts of convex polytopes when the dimensions are varied. The thresholds in these very different problems appear in the same critical locations after appropriate calibration of variables. These thresholds are important in each subject area: for linear modelling, they place hard limits on the degree to which the now-ubiquitous high-throughput data analysis can be successful; for robustness, they place hard limits on the degree to which standard robust fitting methods can tolerate outliers before breaking down; for compressed sensing, they define the sharp boundary of the undersampling/sparsity tradeoff in undersampling theorems. Existing derivations of phase transitions in combinatorial geometry assume the underlying matrices have independent and identically distributed (iid) Gaussian elements. In applications, however, it often seems that Gaussianity is not required. We conducted an extensive computational experiment and formal inferential analysis to test the hypothesis that these phase transitions are universal across a range of underlying matrix ensembles. The experimental results are consistent with an asymptotic large-$n$ universality across matrix ensembles; finite-sample universality can be rejected.
|
Title: Bayesian History Reconstruction of Complex Human Gene Clusters on a Phylogeny
|
Abstract: Clusters of genes that have evolved by repeated segmental duplication present difficult challenges throughout genomic analysis, from sequence assembly to functional analysis. Improved understanding of these clusters is of utmost importance, since they have been shown to be the source of evolutionary innovation, and have been linked to multiple diseases, including HIV and a variety of cancers. Previously, Zhang et al. (2008) developed an algorithm for reconstructing parsimonious evolutionary histories of such gene clusters, using only human genomic sequence data. In this paper, we propose a probabilistic model for the evolution of gene clusters on a phylogeny, and an MCMC algorithm for reconstruction of duplication histories from genomic sequences in multiple species. Several projects are underway to obtain high quality BAC-based assemblies of duplicated clusters in multiple species, and we anticipate that our method will be useful in analyzing these valuable new data sets.
|
Title: Evaluating Health Risk Models
|
Abstract: Interest in targeted disease prevention has stimulated development of models that assign risks to individuals, using their personal covariates. We need to evaluate these models, and to quantify the gains achieved by expanding a model with additional covariates. We describe several performance measures for risk models, and show how they are related. Application of the measures to risk models for hypothetical populations and for postmenopausal US women illustrates several points. First, model performance is constrained by the distribution of true risks in the population. This complicates the comparison of two models if they are applied to populations with different covariate distributions. Second, the Brier Score and the Integrated Discrimination Improvement (IDI) are more useful than the concordance statistic for quantifying precision gains obtained from model expansion. Finally, these precision gains are apt to be small, although they may be large for some individuals. We propose a new way to identify these individuals, and show how to quantify how much they gain by measuring the additional covariates. Those with the largest gains could be targeted for cost-efficient covariate assessment.
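For reference, the two precision measures favored above are simple to compute; a minimal sketch, assuming 0/1 outcomes y and risk vectors as numpy arrays (function names are mine):

```python
import numpy as np

def brier_score(y, p):
    """Brier score: mean squared difference between outcome and assigned risk."""
    return np.mean((y - p) ** 2)

def idi(y, p_old, p_new):
    """Integrated Discrimination Improvement: the gain in mean risk separation
    between events (y=1) and non-events (y=0) from expanding the model."""
    sep_new = p_new[y == 1].mean() - p_new[y == 0].mean()
    sep_old = p_old[y == 1].mean() - p_old[y == 0].mean()
    return sep_new - sep_old
```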
|
Title: Maximal digital straight segments and convergence of discrete geometric estimators
|
Abstract: Discrete geometric estimators approach geometric quantities on digitized shapes without any knowledge of the continuous shape. A classical yet difficult problem is to show that an estimator asymptotically converges toward the true geometric quantity as the resolution increases. We study here the convergence of local estimators based on Digital Straight Segment (DSS) recognition. It is closely linked to the asymptotic growth of maximal DSSs, for which we show bounds both on their number and sizes. These results not only give better insights into digitized curves but also indicate that curvature estimators based on local DSS recognition are not likely to converge. We indeed invalidate a hypothesis which was essential in the only known convergence theorem of a discrete curvature estimator. The proof involves results from arithmetic properties of digital lines, digital convexity, combinatorics, continued fractions and random polytopes.
|
Title: Coding cells of digital spaces: a framework to write generic digital topology algorithms
|
Abstract: This paper proposes a concise coding of the cells of n-dimensional finite regular grids. It induces a simple, generic and efficient framework for implementing classical digital topology data structures and algorithms. Discrete subsets of multidimensional images (e.g. regions, digital surfaces, cubical cell complexes) then have a common and compact representation. Moreover, algorithms have a straightforward and efficient implementation, which is independent of the dimension or sizes of digital images. We illustrate that point with generic hypersurface boundary extraction algorithms by scanning or tracking. This framework has been implemented, and basic operations as well as the presented applications have been benchmarked.
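One classical coding with these properties (the paper's exact packing may differ) assigns each cell a tuple of Khalimsky integers, where odd coordinates mark the axes the cell spans; dimension and incidence then reduce to parity arithmetic. A sketch:

```python
def cell_dim(cell):
    """Cell dimension = number of axes the cell spans = number of odd coordinates."""
    return sum(c % 2 for c in cell)

def facets(cell):
    """The (d-1)-faces of a d-cell: turn one odd coordinate even by moving +-1."""
    out = []
    for k, c in enumerate(cell):
        if c % 2:
            for step in (-1, 1):
                f = list(cell)
                f[k] = c + step
                out.append(tuple(f))
    return out

# Example: (3, 2) codes a horizontal 1-cell (edge) of a 2D grid; its facets
# are the two pointels (2, 2) and (4, 2).
assert cell_dim((3, 2)) == 1 and facets((3, 2)) == [(2, 2), (4, 2)]
```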
|
Title: Combinatorial pyramids and discrete geometry for energy-minimizing segmentation
|
Abstract: This paper defines the basis of a new hierarchical framework for segmentation algorithms based on energy minimization schemes. This new framework is based on two formal tools. First, a combinatorial pyramid efficiently encodes a hierarchy of partitions. Second, discrete geometric estimators precisely measure some important geometric parameters of the regions. These measures, combined with photometric and topological features of the partition, allow us to design energy terms based on discrete measures. Our segmentation framework exploits these energies to build a pyramid of image partitions with a minimization scheme. Some experiments illustrating our framework are shown and discussed.
|
Title: What Does Artificial Life Tell Us About Death?
|
Abstract: A short philosophical essay.
|
Title: Employing Wikipedia's Natural Intelligence For Cross Language Information Retrieval
|
Abstract: In this paper we present a novel method for retrieving information in languages other than that of the query. We use this technique in combination with existing traditional Cross Language Information Retrieval (CLIR) techniques to improve their results. The method has a number of advantages over traditional techniques that rely on machine translation to translate the query and then search the target document space with the translated query. It does not depend on the availability of a machine translation algorithm for the desired language; instead, it uses existing, readily available sources of translated information on the internet as a "middle man". In this paper we use Wikipedia; however, any similar multilingual, cross-referenced body of documents can be used. For evaluation and comparison purposes we implemented both a traditional machine translation approach and the Wikipedia approach separately.
|
Title: Noisy Independent Factor Analysis Model for Density Estimation and Classification
|
Abstract: We consider the problem of multivariate density estimation when the unknown density is assumed to follow a particular form of dimensionality reduction, a noisy independent factor analysis (IFA) model. In this model the data are generated by a number of latent independent components having unknown distributions and are observed in Gaussian noise. We do not assume that either the number of components or the matrix mixing the components is known. We show that densities of this form can be estimated with a fast rate. Using the mirror averaging aggregation algorithm, we construct a density estimator which achieves a nearly parametric rate $\log^{1/4} n/\sqrt{n}$, independent of the dimensionality of the data, as the sample size $n$ tends to infinity. This estimator is adaptive to the number of components, their distributions and the mixing matrix. We then apply this density estimator to construct nonparametric plug-in classifiers and show that they achieve the best obtainable rate of the excess Bayes risk, to within a logarithmic factor independent of the dimension of the data. Applications of this classifier to simulated data sets and to real data from a remote sensing experiment show promising results.
|
Title: Entropy Message Passing
|
Abstract: The paper proposes a new message passing algorithm for cycle-free factor graphs. The proposed "entropy message passing" (EMP) algorithm may be viewed as sum-product message passing over the entropy semiring, which has previously appeared in automata theory. The primary use of EMP is to compute the entropy of a model. However, EMP can also be used to compute expressions that appear in expectation maximization and in gradient descent algorithms.
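For the curious reader, the entropy semiring mentioned here works over pairs; below is a sketch of its two operations and the lifting of probabilities into it (variable names are mine, the algebra itself is the standard one from automata theory):

```python
import math

def e_add(a, b):
    """Semiring addition: componentwise sum of (probability, entropy) pairs."""
    return (a[0] + b[0], a[1] + b[1])

def e_mul(a, b):
    """Semiring multiplication: (x1, x2) * (y1, y2) = (x1*y1, x1*y2 + x2*y1),
    the product rule that lets entropy terms accumulate along a path."""
    return (a[0] * b[0], a[0] * b[1] + a[1] * b[0])

def lift(p):
    """Embed a local probability as (p, -p log p); semiring-summing the lifted
    path weights of a model yields (Z, sum of -p log p terms) in one pass."""
    return (p, -p * math.log(p)) if p > 0 else (0.0, 0.0)
```

Running sum-product with these operations in place of ordinary + and * is exactly what makes a single message-passing sweep return an entropy.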
|
Title: Testing for Homogeneity in Meta-Analysis I. The One Parameter Case: Standardized Mean Difference
|
Abstract: Meta-analysis seeks to combine the results of several experiments in order to improve the accuracy of decisions. It is common to use a test for homogeneity to determine if the results of the several experiments are sufficiently similar to warrant their combination into an overall result. Cochran's Q statistic is frequently used for this homogeneity test. It is often assumed that Q follows a chi-square distribution under the null hypothesis of homogeneity, but it has long been known that this asymptotic distribution for Q is not accurate for moderate sample sizes. Here we present formulas for the mean and variance of Q under the null hypothesis which represent O(1/n) corrections to the corresponding chi-square moments in the one parameter case. The formulas are fairly complicated, and so we provide a program (available at http://www.imperial.ac.uk/stathelp/researchprojects/metaanalysis) for making the necessary calculations. We apply the results to the standardized mean difference (Cohen's d-statistic) and consider two approximations: a gamma distribution with estimated shape and scale parameters and the chi-square distribution with fractional degrees of freedom equal to the estimated mean of Q. We recommend the latter distribution as an approximate distribution for Q to use for testing the null hypothesis.
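The recommended approximation is easy to apply once the null mean of Q has been computed from the paper's formulas; a sketch with scipy (illustrative names, inverse-variance weights assumed):

```python
import numpy as np
from scipy import stats

def cochran_q(effects, variances):
    """Cochran's Q: inverse-variance-weighted squared deviation of the study
    effects from their weighted mean."""
    w = 1.0 / np.asarray(variances)
    t = np.asarray(effects)
    t_bar = np.sum(w * t) / np.sum(w)
    return np.sum(w * (t - t_bar) ** 2)

def q_pvalue(q_obs, mean_q_null):
    """Chi-square approximation with fractional degrees of freedom equal to the
    estimated null mean of Q, as the abstract recommends."""
    return stats.chi2.sf(q_obs, df=mean_q_null)
```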
|
Title: Mnesors for automatic control
|
Abstract: Mnesors are defined as elements of a semimodule over the min-plus integers. This two-sorted structure merges the graduation properties of vectors with the idempotent properties of Boolean values, which makes it appropriate for hybrid systems. We apply it to the control of an inverted pendulum and design a fully logical controller, that is, one that avoids the usual algebra of real numbers.
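Mnesors themselves are the paper's construction, but the underlying min-plus arithmetic is easy to sketch: "addition" is minimum, "multiplication" is ordinary addition, and scalars act on vectors componentwise:

```python
def mp_add(a, b):
    """Min-plus 'addition' is minimum; its neutral element is +infinity."""
    return min(a, b)

def mp_mul(a, b):
    """Min-plus 'multiplication' is ordinary addition; its neutral element is 0."""
    return a + b

def mp_scale(k, v):
    """Scalar action on a vector in the semimodule: add k to every component."""
    return [mp_mul(k, x) for x in v]

def mp_vec_add(u, v):
    """Vector 'sum' in the semimodule: componentwise minimum."""
    return [mp_add(a, b) for a, b in zip(u, v)]
```

Note that mp_add is idempotent (min(a, a) == a), which is the Boolean-like behavior the abstract alludes to.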
|
Title: Deformable Model with a Complexity Independent from Image Resolution
|
Abstract: We present a parametric deformable model which recovers image components with a complexity independent of the resolution of input images. The proposed model also automatically changes its topology and remains fully compatible with the general framework of deformable models. More precisely, the image space is equipped with a metric that expands salient image details according to their strength and their curvature. During the whole evolution of the model, the sampling of the contour is kept regular with respect to this metric. In this way, the vertex density is reduced along most parts of the curve while a high quality of shape representation is preserved. The complexity of the deformable model is thus improved and is no longer influenced by feature-preserving changes in the resolution of input images. Building the metric requires a prior estimation of contour curvature. It is obtained using a robust estimator which investigates the local variations in the orientation of the image gradient. Experimental results on both computer-generated and biomedical images are presented to illustrate the advantages of our approach.
|
Title: Minimax rank estimation for subspace tracking
|
Abstract: Rank estimation is a classical model order selection problem that arises in a variety of important statistical signal and array processing systems, yet is addressed relatively infrequently in the extant literature. Here we present sample covariance asymptotics stemming from random matrix theory, and bring them to bear on the problem of optimal rank estimation in the context of the standard array observation model with additive white Gaussian noise. The most significant of these results demonstrates the existence of a phase transition threshold, below which eigenvalues and associated eigenvectors of the sample covariance fail to provide any information on population eigenvalues. We then develop a decision-theoretic rank estimation framework that leads to a simple ordered selection rule based on thresholding; in contrast to competing approaches, however, it admits asymptotic minimax optimality and is free of tuning parameters. We analyze the asymptotic performance of our rank selection procedure and conclude with a brief simulation study demonstrating its practical efficacy in the context of subspace tracking.
|
Title: Semi-Myopic Sensing Plans for Value Optimization
|
Abstract: We consider the following sequential decision problem. Given a set of items of unknown utility, we must select an item of as high a utility as possible ("the selection problem"). Measurements (possibly noisy) of item values prior to selection are allowed, at a known cost. The goal is to optimize the overall sequential decision process of measurements and selection. Value of information (VOI) is a well-known scheme for selecting measurements, but the intractability of the problem typically leads to using myopic VOI estimates. In the selection problem, myopic VOI frequently badly underestimates the value of information, leading to inferior sensing plans. We relax the strict myopic assumption into a scheme we term semi-myopic, providing a spectrum of methods that can improve the performance of sensing plans. In particular, we propose the efficiently computable method of "blinkered" VOI, and examine theoretical bounds for special cases. Empirical evaluation of "blinkered" VOI in the selection problem with normally distributed item values shows that it performs much better than pure myopic VOI.
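Blinkered VOI is the paper's contribution, but the myopic baseline it improves on has a closed form for normally distributed item values; a sketch of the single-measurement, noise-free case (a noisy measurement would shrink sigma accordingly):

```python
from scipy.stats import norm

def myopic_voi(mu, sigma, best_other):
    """Myopic VOI of observing X ~ N(mu, sigma^2) when the best alternative
    utility is best_other: E[max(X, best_other)] - max(mu, best_other),
    via E[max(X, c)] = c + (mu - c) * Phi(z) + sigma * phi(z), z = (mu - c)/sigma."""
    z = (mu - best_other) / sigma
    post = best_other + (mu - best_other) * norm.cdf(z) + sigma * norm.pdf(z)
    return post - max(mu, best_other)
```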
|
Title: Variable selection in high-dimensional linear models: partially faithful distributions and the PC-simple algorithm
|
Abstract: We consider variable selection in high-dimensional linear models where the number of covariates greatly exceeds the sample size. We introduce the new concept of partial faithfulness and use it to infer associations between the covariates and the response. Under partial faithfulness, we develop a simplified version of the PC algorithm (Spirtes et al., 2000), the PC-simple algorithm, which is computationally feasible even with thousands of covariates and provides consistent variable selection under conditions on the random design matrix that are of a different nature than coherence conditions for penalty-based approaches like the Lasso. Simulations and application to real data show that our method is competitive compared to penalty-based approaches. We provide an efficient implementation of the algorithm in the R-package pcalg.
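To convey the flavor of the algorithm, here is a simplified first-order screening pass, not the full PC-simple procedure (which grows the conditioning-set size and tests with a Fisher z-transform); all names are illustrative:

```python
import numpy as np

def partial_corr(x, y, z):
    """First-order sample partial correlation of x and y given z."""
    rxy = np.corrcoef(x, y)[0, 1]
    rxz = np.corrcoef(x, z)[0, 1]
    ryz = np.corrcoef(y, z)[0, 1]
    return (rxy - rxz * ryz) / np.sqrt((1 - rxz ** 2) * (1 - ryz ** 2))

def screen_once(X, y, active, thresh=0.05):
    """Drop covariate j if its partial correlation with y, given some other
    active covariate, is near zero -- one pruning pass in the PC-simple spirit."""
    return [j for j in active
            if not any(abs(partial_corr(X[:, j], y, X[:, k])) < thresh
                       for k in active if k != j)]
```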
|
Title: Reduction algorithm for the NPMLE for the distribution function of bivariate interval censored data
|
Abstract: We study computational aspects of the nonparametric maximum likelihood estimator (NPMLE) for the distribution function of bivariate interval censored data. The computation of the NPMLE consists of two steps: a parameter reduction step and an optimization step. In this paper we focus on the reduction step. We introduce two new reduction algorithms: the Tree algorithm and the HeightMap algorithm. The Tree algorithm is only mentioned briefly. The HeightMap algorithm is discussed in detail and also given in pseudo code. It is a very fast and simple algorithm of time complexity O(n^2). This is an order of magnitude faster than the best previously known algorithm, the O(n^3) algorithm of Bogaerts and Lesaffre (2003). We compare our algorithms with the algorithms of Gentleman and Vandal (2001), Song (2001) and Bogaerts and Lesaffre (2003), using simulated data. We show that our algorithms, and especially the HeightMap algorithm, are significantly faster. Finally, we point out that the HeightMap algorithm can be easily generalized to d-dimensional data with d>2. Such a multivariate version of the HeightMap algorithm has time complexity O(n^d).
|
Title: Generation of Fractional Factorial Designs
|
Abstract: The joint use of counting functions, Hilbert bases and Markov bases allows us to define a procedure to generate all the fractions that satisfy a given set of constraints in terms of orthogonality. The general case of mixed-level designs, without restrictions on the number of levels of each factor (such as primes or powers of primes), is studied. This new methodology has been tested on some significant classes of fractional factorial designs, including mixed-level orthogonal arrays.
|
Title: Adaptive Regularization of Ill-Posed Problems: Application to Non-rigid Image Registration
|
Abstract: We introduce an adaptive regularization approach. In contrast to conventional Tikhonov regularization, which specifies a fixed regularization operator, we estimate it simultaneously with the model parameters. From a Bayesian perspective, we estimate the prior distribution on the parameters, assuming that it is close to some given model distribution. We constrain the prior distribution to be a Gauss-Markov random field (GMRF), which allows us to solve for the prior distribution analytically and provides a fast optimization algorithm. We apply our approach to non-rigid image registration to estimate the spatial transformation between two images. Our evaluation shows that the adaptive regularization approach significantly outperforms standard variational methods.
|
Title: AIS for Misbehavior Detection in Wireless Sensor Networks: Performance and Design Principles
|
Abstract: A sensor network is a collection of wireless devices that are able to monitor physical or environmental conditions. These devices (nodes) are expected to operate autonomously, be battery powered and have very limited computational capabilities. This makes the task of protecting a sensor network against misbehavior or possible malfunction a challenging problem. In this document we discuss the performance of artificial immune systems (AIS) when used as the mechanism for detecting misbehavior. We show that (i) mechanisms of the AIS have to be carefully applied in order to avoid security weaknesses, (ii) the choice of genes and their interaction have a profound influence on the performance of the AIS, (iii) randomly created detectors do not comply with limitations imposed by communications protocols and (iv) the data traffic pattern does not seem to significantly impact the overall performance. We identified a specific MAC-layer-based gene that proved especially useful for detection; genes measure a network's performance from a node's viewpoint. Furthermore, we identified an interesting complementarity property of genes; this property exploits the local nature of sensor networks and moves the burden of excessive communication from normally behaving nodes to misbehaving nodes. These results have a direct impact on the design of AIS for sensor networks and on the engineering of sensor networks.
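The core AIS mechanism under discussion, negative selection, fits in a few lines; a generic sketch only (the gene encoding and the matching rule, which the paper shows must respect protocol constraints, are left as a user-supplied predicate):

```python
import random

def train_detectors(self_set, n_detectors, gene_len, match):
    """Negative selection: keep randomly generated detectors that match no
    'self' (normal behavior) sample. The abstract notes that purely random
    generation can violate protocol-imposed limits -- hence point (iii)."""
    detectors = []
    while len(detectors) < n_detectors:
        d = tuple(random.randint(0, 1) for _ in range(gene_len))
        if not any(match(d, s) for s in self_set):
            detectors.append(d)
    return detectors

def is_misbehaving(sample, detectors, match):
    """A sample matched by any detector is classified as non-self (misbehavior)."""
    return any(match(d, sample) for d in detectors)
```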
|
Title: Transposable regularized covariance models with an application to missing data imputation
|
Abstract: Missing data estimation is an important challenge with high-dimensional data arranged in the form of a matrix. Typically this data matrix is transposable, meaning that either the rows, columns or both can be treated as features. To model transposable data, we present a modification of the matrix-variate normal, the mean-restricted matrix-variate normal, in which the rows and columns each have a separate mean vector and covariance matrix. By placing additive penalties on the inverse covariance matrices of the rows and columns, these so-called transposable regularized covariance models allow for maximum likelihood estimation of the mean and nonsingular covariance matrices. Using these models, we formulate EM-type algorithms for missing data imputation in both the multivariate and transposable frameworks. We present theoretical results exploiting the structure of our transposable models that allow these models and imputation methods to be applied to high-dimensional data. Simulations and results on microarray data and the Netflix data show that these imputation techniques often outperform existing methods and offer a greater degree of flexibility.
|
Title: Semiparametric modeling of autonomous nonlinear dynamical systems with applications
|
Abstract: In this paper, we propose a semi-parametric model for autonomous nonlinear dynamical systems and devise an estimation procedure for model fitting. This model incorporates subject-specific effects and can be viewed as a nonlinear semi-parametric mixed effects model. We also propose a computationally efficient model selection procedure. We prove consistency of the proposed estimator under suitable regularity conditions. We show by simulation studies that the proposed estimation as well as model selection procedures can efficiently handle sparse and noisy measurements. Finally, we apply the proposed method to plant growth data used to study growth displacement rates within meristems of maize roots under two different experimental conditions.
|
Title: Finding Significant Subregions in Large Image Databases
|
Abstract: Images have become an important data source in many scientific and commercial domains. Analysis and exploration of image collections often requires the retrieval of the best subregions matching a given query. The support of such content-based retrieval requires not only the formulation of an appropriate scoring function for defining relevant subregions but also the design of new access methods that can scale to large databases. In this paper, we propose a solution to this problem of querying significant image subregions. We design a scoring scheme to measure the similarity of subregions. Our similarity measure extends to any image descriptor. All the images are tiled and each alignment of the query and a database image produces a tile score matrix. We show that the problem of finding the best connected subregion from this matrix is NP-hard and develop a dynamic programming heuristic. With this heuristic, we develop two index based scalable search strategies, TARS and SPARS, to query patterns in a large image repository. These strategies are general enough to work with other scoring schemes and heuristics. Experimental results on real image datasets show that TARS saves more than 87% query time on small queries, and SPARS saves up to 52% query time on large queries as compared to linear search. Qualitative tests on synthetic and real datasets achieve precision of more than 80%.
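The paper's connected-subregion heuristic is its own; for intuition only, restricting subregions to axis-aligned rectangles already admits the classical 2D Kadane reduction over a tile score matrix:

```python
import numpy as np

def best_rectangle(score):
    """Best-scoring rectangular subregion of a tile score matrix in O(m^2 n):
    fix a band of rows, collapse it into column sums, run 1D max-subarray."""
    m, n = score.shape
    best, best_box = -np.inf, None
    for top in range(m):
        col = np.zeros(n)
        for bottom in range(top, m):
            col += score[bottom]
            run, start = 0.0, 0
            for j in range(n):
                if run <= 0:
                    run, start = col[j], j     # restart the 1D window
                else:
                    run += col[j]
                if run > best:
                    best, best_box = run, (top, bottom, start, j)
    return best, best_box
```

General connected subregions are what make the exact problem NP-hard; the rectangle case shows why restricted families remain tractable.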
|
Title: Forest Garrote
|
Abstract: Variable selection for high-dimensional linear models has received a lot of attention lately, mostly in the context of l1-regularization. Part of the attraction is the variable selection effect: parsimonious models are obtained, which are very suitable for interpretation. In terms of predictive power, however, these regularized linear models are often slightly inferior to machine learning procedures like tree ensembles. Tree ensembles, on the other hand, usually lack a formal way of doing variable selection and are difficult to visualize. A Garrote-style convex penalty for tree ensembles, in particular Random Forests, is proposed. The penalty selects functional groups of nodes in the trees. These could be as simple as monotone functions of individual predictor variables. This yields a parsimonious function fit, which lends itself easily to visualization and interpretation. The predictive power is maintained at least at the same level as the original tree ensemble. A key feature of the method is that, once a tree ensemble is fitted, no further tuning parameter needs to be selected. The empirical performance is demonstrated on a wide array of datasets.
|
Title: The Statistical Analysis of fMRI Data
|
Abstract: In recent years there has been explosive growth in the number of neuroimaging studies performed using functional Magnetic Resonance Imaging (fMRI). The field that has grown around the acquisition and analysis of fMRI data is intrinsically interdisciplinary in nature and involves contributions from researchers in neuroscience, psychology, physics and statistics, among others. A standard fMRI study gives rise to massive amounts of noisy data with a complicated spatio-temporal correlation structure. Statistics plays a crucial role in understanding the nature of the data and obtaining relevant results that can be used and interpreted by neuroscientists. In this paper we discuss the analysis of fMRI data, from the initial acquisition of the raw data to its use in locating brain activity, making inference about brain connectivity and predictions about psychological or disease states. Along the way, we illustrate interesting and important issues where statistics already plays a crucial role. We also seek to illustrate areas where statistics has perhaps been underutilized and will have an increased role in the future.
|
Title: Two-Dimensional ARMA Modeling for Breast Cancer Detection and Classification
|
Abstract: We propose a new model-based computer-aided diagnosis (CAD) system for tumor detection and classification (cancerous vs. benign) in breast images. Specifically, we show that breast images (X-ray, ultrasound and MRI) can be accurately modeled by two-dimensional autoregressive moving-average (ARMA) random fields. We derive two-stage Yule-Walker least-squares estimates of the model parameters, which are subsequently used as the basis for statistical inference and biophysical interpretation of the breast image. We use a k-means classifier to segment the breast image into three regions: healthy tissue, benign tumor, and cancerous tumor. Our simulation results on ultrasound breast images illustrate the power of the proposed approach.
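To make the estimation step concrete, the AR part of such a model can be fitted by least squares on causal (quarter-plane) neighborhoods; a sketch only, since the paper's two-stage procedure also recovers the MA parameters:

```python
import numpy as np

def fit_2d_ar(img, p=1):
    """Least-squares fit of a causal 2D AR model: regress each pixel on its
    quarter-plane past neighbors within distance p along each axis."""
    rows, cols = img.shape
    X, y = [], []
    for i in range(p, rows):
        for j in range(p, cols):
            past = [img[i - di, j - dj]
                    for di in range(p + 1) for dj in range(p + 1)
                    if (di, dj) != (0, 0)]
            X.append(past)
            y.append(img[i, j])
    coef, *_ = np.linalg.lstsq(np.array(X), np.array(y), rcond=None)
    return coef
```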
|
Title: How opinions are received by online communities: A case study on Amazon.com helpfulness votes
|